id
stringlengths 6
113
| author
stringlengths 2
36
| task_category
stringclasses 42
values | tags
listlengths 1
4.05k
| created_time
timestamp[ns, tz=UTC]date 2022-03-02 23:29:04
2025-04-10 08:38:38
| last_modified
stringdate 2020-05-14 13:13:12
2025-04-19 04:15:39
| downloads
int64 0
118M
| likes
int64 0
4.86k
| README
stringlengths 30
1.01M
| matched_bigbio_names
listlengths 1
8
⌀ | is_bionlp
stringclasses 3
values | model_cards
stringlengths 0
1M
| metadata
stringlengths 2
698k
| source
stringclasses 2
values | matched_task
listlengths 1
10
⌀ | __index_level_0__
int64 0
46.9k
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
AryaParikh/summ_arp_org
|
AryaParikh
|
summarization
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:Hinataaa/autotrain-data-text_summary_arp",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-03-30T08:59:01Z |
2023-03-30T09:09:14+00:00
| 21 | 1 |
---
datasets:
- Hinataaa/autotrain-data-text_summary_arp
language:
- en
tags:
- autotrain
- summarization
widget:
- text: I love AutoTrain 🤗
co2_eq_emissions:
emissions: 4.2992847624934365
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 45146113307
- CO2 Emissions (in grams): 4.2993
## Validation Metrics
- Loss: 1.285
- Rouge1: 49.529
- Rouge2: 25.404
- RougeL: 46.465
- RougeLsum: 46.645
- Gen Len: 18.803
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/Hinataaa/autotrain-text_summary_arp-45146113307
```
| null |
Non_BioNLP
|
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 45146113307
- CO2 Emissions (in grams): 4.2993
## Validation Metrics
- Loss: 1.285
- Rouge1: 49.529
- Rouge2: 25.404
- RougeL: 46.465
- RougeLsum: 46.645
- Gen Len: 18.803
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/Hinataaa/autotrain-text_summary_arp-45146113307
```
|
{"datasets": ["Hinataaa/autotrain-data-text_summary_arp"], "language": ["en"], "tags": ["autotrain", "summarization"], "widget": [{"text": "I love AutoTrain 🤗"}], "co2_eq_emissions": {"emissions": 4.2992847624934365}}
|
task
|
[
"SUMMARIZATION"
] | 42,444 |
mrplants/phrasebank-sentiment-analysis
|
mrplants
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:financial_phrasebank",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-09-26T02:06:25Z |
2023-10-10T03:45:23+00:00
| 7 | 0 |
---
base_model: bert-base-uncased
datasets:
- financial_phrasebank
license: apache-2.0
metrics:
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: phrasebank-sentiment-analysis
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
config: sentences_50agree
split: train
args: sentences_50agree
metrics:
- type: f1
value: 0.8523902641427674
name: F1
- type: accuracy
value: 0.8658872077028886
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phrasebank-sentiment-analysis
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7138
- F1: 0.8524
- Accuracy: 0.8659
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.1413 | 0.94 | 100 | 0.5298 | 0.8460 | 0.8652 |
| 0.057 | 1.89 | 200 | 0.7137 | 0.8354 | 0.8556 |
| 0.0399 | 2.83 | 300 | 0.7157 | 0.8375 | 0.8473 |
| 0.0279 | 3.77 | 400 | 0.7138 | 0.8524 | 0.8659 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phrasebank-sentiment-analysis
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7138
- F1: 0.8524
- Accuracy: 0.8659
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.1413 | 0.94 | 100 | 0.5298 | 0.8460 | 0.8652 |
| 0.057 | 1.89 | 200 | 0.7137 | 0.8354 | 0.8556 |
| 0.0399 | 2.83 | 300 | 0.7157 | 0.8375 | 0.8473 |
| 0.0279 | 3.77 | 400 | 0.7138 | 0.8524 | 0.8659 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
{"base_model": "bert-base-uncased", "datasets": ["financial_phrasebank"], "license": "apache-2.0", "metrics": ["f1", "accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "phrasebank-sentiment-analysis", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "financial_phrasebank", "type": "financial_phrasebank", "config": "sentences_50agree", "split": "train", "args": "sentences_50agree"}, "metrics": [{"type": "f1", "value": 0.8523902641427674, "name": "F1"}, {"type": "accuracy", "value": 0.8658872077028886, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,445 |
srikarvar/multilingual-e5-small-triplet-final
|
srikarvar
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:546",
"loss:TripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:intfloat/multilingual-e5-small",
"base_model:finetune:intfloat/multilingual-e5-small",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-08-20T06:14:46Z |
2024-08-20T06:15:40+00:00
| 5 | 0 |
---
base_model: intfloat/multilingual-e5-small
datasets: []
language: []
library_name: sentence-transformers
metrics:
- cosine_accuracy
- dot_accuracy
- manhattan_accuracy
- euclidean_accuracy
- max_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:546
- loss:TripletLoss
widget:
- source_sentence: How to cook a turkey?
sentences:
- How to make a turkey sandwich?
- World's biggest desert by area
- Steps to roast a turkey
- source_sentence: What is the best way to learn a new language?
sentences:
- Author of the play 'Hamlet'
- What is the fastest way to travel?
- How can I effectively learn a new language?
- source_sentence: Who wrote 'To Kill a Mockingbird'?
sentences:
- Who wrote 'The Great Gatsby'?
- How can I effectively save money?
- Author of 'To Kill a Mockingbird'
- source_sentence: Who was the first person to climb Mount Everest?
sentences:
- Steps to visit the Great Wall of China
- Who was the first person to climb K2?
- First climber to reach the summit of Everest
- source_sentence: What is the capital city of Canada?
sentences:
- First circumnavigator of the globe
- What is the capital of Canada?
- What is the capital city of Australia?
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-small
results:
- task:
type: triplet
name: Triplet
dataset:
name: triplet validation
type: triplet-validation
metrics:
- type: cosine_accuracy
value: 0.9836065573770492
name: Cosine Accuracy
- type: dot_accuracy
value: 0.01639344262295082
name: Dot Accuracy
- type: manhattan_accuracy
value: 0.9836065573770492
name: Manhattan Accuracy
- type: euclidean_accuracy
value: 0.9836065573770492
name: Euclidean Accuracy
- type: max_accuracy
value: 0.9836065573770492
name: Max Accuracy
---
# SentenceTransformer based on intfloat/multilingual-e5-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) <!-- at revision fd1525a9fd15316a2d503bf26ab031a61d056e98 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("srikarvar/multilingual-e5-small-triplet-final")
# Run inference
sentences = [
'What is the capital city of Canada?',
'What is the capital of Canada?',
'What is the capital city of Australia?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `triplet-validation`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:-------------------|:-----------|
| cosine_accuracy | 0.9836 |
| dot_accuracy | 0.0164 |
| manhattan_accuracy | 0.9836 |
| euclidean_accuracy | 0.9836 |
| **max_accuracy** | **0.9836** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 546 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 10.78 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.52 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.75 tokens</li><li>max: 22 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-----------------------------------------------------|:----------------------------------------------|:-------------------------------------------------------|
| <code>What is the capital of Brazil?</code> | <code>Capital city of Brazil</code> | <code>What is the capital of Argentina?</code> |
| <code>How do I install Python on my computer?</code> | <code>How do I set up Python on my PC?</code> | <code>How do I uninstall Python on my computer?</code> |
| <code>How do I apply for a credit card?</code> | <code>How do I get a credit card?</code> | <code>How do I cancel a credit card?</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 61 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 10.66 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.43 tokens</li><li>max: 14 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.54 tokens</li><li>max: 17 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------|:---------------------------------------------------------|:-----------------------------------------------------|
| <code>How to create a podcast?</code> | <code>Steps to start a podcast</code> | <code>How to create a vlog?</code> |
| <code>How many states are there in the USA?</code> | <code>Total number of states in the United States</code> | <code>How many provinces are there in Canada?</code> |
| <code>What is the population of India?</code> | <code>How many people live in India?</code> | <code>What is the population of China?</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 2
- `learning_rate`: 5e-06
- `weight_decay`: 0.01
- `num_train_epochs`: 12
- `lr_scheduler_type`: cosine
- `warmup_steps`: 50
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-06
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 12
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 50
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | triplet-validation_max_accuracy |
|:-----------:|:-------:|:-------------:|:----------:|:-------------------------------:|
| 0.5714 | 10 | 4.9735 | - | - |
| 0.9714 | 17 | - | 4.9198 | - |
| 1.1429 | 20 | 4.9596 | - | - |
| 1.7143 | 30 | 4.9357 | - | - |
| 2.0 | 35 | - | 4.8494 | - |
| 2.2857 | 40 | 4.896 | - | - |
| 2.8571 | 50 | 4.8587 | - | - |
| 2.9714 | 52 | - | 4.7479 | - |
| 3.4286 | 60 | 4.8265 | - | - |
| 4.0 | 70 | 4.7706 | 4.6374 | - |
| 4.5714 | 80 | 4.7284 | - | - |
| 4.9714 | 87 | - | 4.5422 | - |
| 5.1429 | 90 | 4.6767 | - | - |
| 5.7143 | 100 | 4.653 | - | - |
| 6.0 | 105 | - | 4.4474 | - |
| 6.2857 | 110 | 4.6234 | - | - |
| 6.8571 | 120 | 4.5741 | - | - |
| 6.9714 | 122 | - | 4.3708 | - |
| 7.4286 | 130 | 4.5475 | - | - |
| 8.0 | 140 | 4.5206 | 4.3162 | - |
| 8.5714 | 150 | 4.517 | - | - |
| 8.9714 | 157 | - | 4.2891 | - |
| 9.1429 | 160 | 4.4587 | - | - |
| 9.7143 | 170 | 4.4879 | - | - |
| 10.0 | 175 | - | 4.2755 | - |
| 10.2857 | 180 | 4.4625 | - | - |
| 10.8571 | 190 | 4.489 | - | - |
| 10.9714 | 192 | - | 4.2716 | - |
| 11.4286 | 200 | 4.4693 | - | - |
| **11.6571** | **204** | **-** | **4.2713** | **0.9836** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on intfloat/multilingual-e5-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) <!-- at revision fd1525a9fd15316a2d503bf26ab031a61d056e98 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("srikarvar/multilingual-e5-small-triplet-final")
# Run inference
sentences = [
'What is the capital city of Canada?',
'What is the capital of Canada?',
'What is the capital city of Australia?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `triplet-validation`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:-------------------|:-----------|
| cosine_accuracy | 0.9836 |
| dot_accuracy | 0.0164 |
| manhattan_accuracy | 0.9836 |
| euclidean_accuracy | 0.9836 |
| **max_accuracy** | **0.9836** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 546 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 10.78 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.52 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.75 tokens</li><li>max: 22 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-----------------------------------------------------|:----------------------------------------------|:-------------------------------------------------------|
| <code>What is the capital of Brazil?</code> | <code>Capital city of Brazil</code> | <code>What is the capital of Argentina?</code> |
| <code>How do I install Python on my computer?</code> | <code>How do I set up Python on my PC?</code> | <code>How do I uninstall Python on my computer?</code> |
| <code>How do I apply for a credit card?</code> | <code>How do I get a credit card?</code> | <code>How do I cancel a credit card?</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 61 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 10.66 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.43 tokens</li><li>max: 14 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.54 tokens</li><li>max: 17 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------|:---------------------------------------------------------|:-----------------------------------------------------|
| <code>How to create a podcast?</code> | <code>Steps to start a podcast</code> | <code>How to create a vlog?</code> |
| <code>How many states are there in the USA?</code> | <code>Total number of states in the United States</code> | <code>How many provinces are there in Canada?</code> |
| <code>What is the population of India?</code> | <code>How many people live in India?</code> | <code>What is the population of China?</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 2
- `learning_rate`: 5e-06
- `weight_decay`: 0.01
- `num_train_epochs`: 12
- `lr_scheduler_type`: cosine
- `warmup_steps`: 50
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-06
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 12
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 50
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | triplet-validation_max_accuracy |
|:-----------:|:-------:|:-------------:|:----------:|:-------------------------------:|
| 0.5714 | 10 | 4.9735 | - | - |
| 0.9714 | 17 | - | 4.9198 | - |
| 1.1429 | 20 | 4.9596 | - | - |
| 1.7143 | 30 | 4.9357 | - | - |
| 2.0 | 35 | - | 4.8494 | - |
| 2.2857 | 40 | 4.896 | - | - |
| 2.8571 | 50 | 4.8587 | - | - |
| 2.9714 | 52 | - | 4.7479 | - |
| 3.4286 | 60 | 4.8265 | - | - |
| 4.0 | 70 | 4.7706 | 4.6374 | - |
| 4.5714 | 80 | 4.7284 | - | - |
| 4.9714 | 87 | - | 4.5422 | - |
| 5.1429 | 90 | 4.6767 | - | - |
| 5.7143 | 100 | 4.653 | - | - |
| 6.0 | 105 | - | 4.4474 | - |
| 6.2857 | 110 | 4.6234 | - | - |
| 6.8571 | 120 | 4.5741 | - | - |
| 6.9714 | 122 | - | 4.3708 | - |
| 7.4286 | 130 | 4.5475 | - | - |
| 8.0 | 140 | 4.5206 | 4.3162 | - |
| 8.5714 | 150 | 4.517 | - | - |
| 8.9714 | 157 | - | 4.2891 | - |
| 9.1429 | 160 | 4.4587 | - | - |
| 9.7143 | 170 | 4.4879 | - | - |
| 10.0 | 175 | - | 4.2755 | - |
| 10.2857 | 180 | 4.4625 | - | - |
| 10.8571 | 190 | 4.489 | - | - |
| 10.9714 | 192 | - | 4.2716 | - |
| 11.4286 | 200 | 4.4693 | - | - |
| **11.6571** | **204** | **-** | **4.2713** | **0.9836** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "intfloat/multilingual-e5-small", "datasets": [], "language": [], "library_name": "sentence-transformers", "metrics": ["cosine_accuracy", "dot_accuracy", "manhattan_accuracy", "euclidean_accuracy", "max_accuracy"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:546", "loss:TripletLoss"], "widget": [{"source_sentence": "How to cook a turkey?", "sentences": ["How to make a turkey sandwich?", "World's biggest desert by area", "Steps to roast a turkey"]}, {"source_sentence": "What is the best way to learn a new language?", "sentences": ["Author of the play 'Hamlet'", "What is the fastest way to travel?", "How can I effectively learn a new language?"]}, {"source_sentence": "Who wrote 'To Kill a Mockingbird'?", "sentences": ["Who wrote 'The Great Gatsby'?", "How can I effectively save money?", "Author of 'To Kill a Mockingbird'"]}, {"source_sentence": "Who was the first person to climb Mount Everest?", "sentences": ["Steps to visit the Great Wall of China", "Who was the first person to climb K2?", "First climber to reach the summit of Everest"]}, {"source_sentence": "What is the capital city of Canada?", "sentences": ["First circumnavigator of the globe", "What is the capital of Canada?", "What is the capital city of Australia?"]}], "model-index": [{"name": "SentenceTransformer based on intfloat/multilingual-e5-small", "results": [{"task": {"type": "triplet", "name": "Triplet"}, "dataset": {"name": "triplet validation", "type": "triplet-validation"}, "metrics": [{"type": "cosine_accuracy", "value": 0.9836065573770492, "name": "Cosine Accuracy"}, {"type": "dot_accuracy", "value": 0.01639344262295082, "name": "Dot Accuracy"}, {"type": "manhattan_accuracy", "value": 0.9836065573770492, "name": "Manhattan Accuracy"}, {"type": "euclidean_accuracy", "value": 0.9836065573770492, "name": "Euclidean Accuracy"}, {"type": "max_accuracy", "value": 0.9836065573770492, "name": "Max Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,446 |
gaudi/opus-mt-en-pap-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-18T15:03:00Z |
2024-10-19T00:25:19+00:00
| 6 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-pap)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-pap).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-pap --output_dir ./ctranslate2/opus-mt-en-pap-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-en-pap-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-en-pap-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-en-pap-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-pap) by Helsinki-NLP.
| null |
Non_BioNLP
|
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-pap)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-pap).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-pap --output_dir ./ctranslate2/opus-mt-en-pap-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-en-pap-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-en-pap-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-en-pap-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-pap) by Helsinki-NLP.
|
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
|
task
|
[
"TRANSLATION"
] | 42,447 |
hiiamsid/autonlp-Summarization-20684328
|
hiiamsid
|
text2text-generation
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autonlp",
"es",
"dataset:hiiamsid/autonlp-data-Summarization",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-10-19T05:09:38+00:00
| 101 | 0 |
---
datasets:
- hiiamsid/autonlp-data-Summarization
language: es
tags:
- a
- u
- t
- o
- n
- l
- p
widget:
- text: I love AutoNLP 🤗
co2_eq_emissions: 1133.9679082840014
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 20684328
- CO2 Emissions (in grams): 1133.9679082840014
## Validation Metrics
- Loss: nan
- Rouge1: 9.4193
- Rouge2: 0.91
- RougeL: 7.9376
- RougeLsum: 8.0076
- Gen Len: 10.65
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/hiiamsid/autonlp-Summarization-20684328
```
| null |
Non_BioNLP
|
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 20684328
- CO2 Emissions (in grams): 1133.9679082840014
## Validation Metrics
- Loss: nan
- Rouge1: 9.4193
- Rouge2: 0.91
- RougeL: 7.9376
- RougeLsum: 8.0076
- Gen Len: 10.65
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/hiiamsid/autonlp-Summarization-20684328
```
|
{"datasets": ["hiiamsid/autonlp-data-Summarization"], "language": "es", "tags": ["a", "u", "t", "o", "n", "l", "p"], "widget": [{"text": "I love AutoNLP 🤗"}], "co2_eq_emissions": 1133.9679082840014}
|
task
|
[
"SUMMARIZATION"
] | 42,448 |
AlexTaylor/distilbert-base-uncased-finetuned-emotion
|
AlexTaylor
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-04-25T12:41:48Z |
2022-04-25T13:24:10+00:00
| 111 | 0 |
---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- type: accuracy
value: 0.926
name: Accuracy
- type: f1
value: 0.9263429084864518
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2257
- Accuracy: 0.926
- F1: 0.9263
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8433 | 1.0 | 250 | 0.3243 | 0.9035 | 0.8996 |
| 0.2583 | 2.0 | 500 | 0.2257 | 0.926 | 0.9263 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2257
- Accuracy: 0.926
- F1: 0.9263
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8433 | 1.0 | 250 | 0.3243 | 0.9035 | 0.8996 |
| 0.2583 | 2.0 | 500 | 0.2257 | 0.926 | 0.9263 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.926, "name": "Accuracy"}, {"type": "f1", "value": 0.9263429084864518, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,449 |
wwydmanski/specter2_pubmed-v0.4
|
wwydmanski
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:8705",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:allenai/specter2_base",
"base_model:finetune:allenai/specter2_base",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-12-06T07:24:23Z |
2024-12-06T07:24:46+00:00
| 13 | 0 |
---
base_model: allenai/specter2_base
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:8705
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Vaccine Administration in High-Risk Groups
sentences:
- '[V+: strategies improving vaccination coverage among children with chronic diseases]. '
- 'Medical writer welcomes advice on working with medical writers. '
- 'Vaccination management. '
- source_sentence: Eosinophil recruitment and STAT6 signalling pathway in nematode
infections
sentences:
- 'The roles of eotaxin and the STAT6 signalling pathway in eosinophil recruitment
and host resistance to the nematodes Nippostrongylus brasiliensis and Heligmosomoides
bakeri. '
- 'ABO blood groups from Palamau, Bihar, India. '
- 'Both stat5a and stat5b are required for antigen-induced eosinophil and T-cell
recruitment into the tissue. '
- source_sentence: Constitutional Medicine Status
sentences:
- '[Present status of constitutional medicine]. '
- 'Convergence of submodality-specific input onto neurons in primary somatosensory
cortex. '
- 'The link between health and wellbeing and constitutional recognition. '
- source_sentence: Telehealth challenges
sentences:
- '[Technological transformations and evolution of the medical practice: current
status, issues and perspectives for the development of telemedicine]. '
- 'The untapped potential of Telehealth. '
- 'Enhanced chartreusin solubility by hydroxybenzoate hydrotropy. '
- source_sentence: Kawasaki disease immunoprophylaxis
sentences:
- '[Effect of immunoglobulin in the prevention of coronary artery aneurysms in Kawasaki
disease]. '
- 'Management of Kawasaki disease. '
- 'IgA anti-epidermal transglutaminase antibodies in dermatitis herpetiformis and
pediatric celiac disease. '
---
# SentenceTransformer based on allenai/specter2_base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [allenai/specter2_base](https://huggingface.co/allenai/specter2_base) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [allenai/specter2_base](https://huggingface.co/allenai/specter2_base) <!-- at revision 3447645e1def9117997203454fa4495937bfbd83 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Kawasaki disease immunoprophylaxis',
'[Effect of immunoglobulin in the prevention of coronary artery aneurysms in Kawasaki disease]. ',
'Management of Kawasaki disease. ',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 8,705 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 7.6 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 19.26 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 11.72 tokens</li><li>max: 46 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------|
| <code>Telehealth challenges</code> | <code>[Technological transformations and evolution of the medical practice: current status, issues and perspectives for the development of telemedicine]. </code> | <code>The untapped potential of Telehealth. </code> |
| <code>Racial disparities in mental health treatment</code> | <code>Relationships between stigma, depression, and treatment in white and African American primary care patients. </code> | <code>Mental Health Care Disparities Now and in the Future. </code> |
| <code>Iatrogenic hyperkalemia in elderly patients with cardiovascular disease</code> | <code>Iatrogenic hyperkalemia as a serious problem in therapy of cardiovascular diseases in elderly patients. </code> | <code>The cardiovascular implications of hypokalemia. </code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `lr_scheduler_type`: cosine_with_restarts
- `warmup_ratio`: 0.1
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: cosine_with_restarts
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0110 | 1 | 2.9861 |
| 0.0220 | 2 | 2.9379 |
| 0.0330 | 3 | 3.0613 |
| 0.0440 | 4 | 2.8081 |
| 0.0549 | 5 | 2.6516 |
| 0.0659 | 6 | 2.3688 |
| 0.0769 | 7 | 2.0502 |
| 0.0879 | 8 | 1.7557 |
| 0.0989 | 9 | 1.5316 |
| 0.1099 | 10 | 1.2476 |
| 0.1209 | 11 | 1.1529 |
| 0.1319 | 12 | 0.9483 |
| 0.1429 | 13 | 0.7187 |
| 0.1538 | 14 | 0.6824 |
| 0.1648 | 15 | 0.593 |
| 0.1758 | 16 | 0.4593 |
| 0.1868 | 17 | 0.3737 |
| 0.1978 | 18 | 0.5082 |
| 0.2088 | 19 | 0.4232 |
| 0.2198 | 20 | 0.3089 |
| 0.2308 | 21 | 0.2057 |
| 0.2418 | 22 | 0.2358 |
| 0.2527 | 23 | 0.2291 |
| 0.2637 | 24 | 0.2707 |
| 0.2747 | 25 | 0.1359 |
| 0.2857 | 26 | 0.2294 |
| 0.2967 | 27 | 0.157 |
| 0.3077 | 28 | 0.0678 |
| 0.3187 | 29 | 0.1022 |
| 0.3297 | 30 | 0.0713 |
| 0.3407 | 31 | 0.0899 |
| 0.3516 | 32 | 0.1385 |
| 0.3626 | 33 | 0.0809 |
| 0.3736 | 34 | 0.1053 |
| 0.3846 | 35 | 0.0925 |
| 0.3956 | 36 | 0.0675 |
| 0.4066 | 37 | 0.0841 |
| 0.4176 | 38 | 0.0366 |
| 0.4286 | 39 | 0.0768 |
| 0.4396 | 40 | 0.0529 |
| 0.4505 | 41 | 0.0516 |
| 0.4615 | 42 | 0.0342 |
| 0.4725 | 43 | 0.0456 |
| 0.4835 | 44 | 0.0344 |
| 0.4945 | 45 | 0.1337 |
| 0.5055 | 46 | 0.0883 |
| 0.5165 | 47 | 0.0691 |
| 0.5275 | 48 | 0.0322 |
| 0.5385 | 49 | 0.0731 |
| 0.5495 | 50 | 0.0376 |
| 0.5604 | 51 | 0.0464 |
| 0.5714 | 52 | 0.0173 |
| 0.5824 | 53 | 0.0516 |
| 0.5934 | 54 | 0.0703 |
| 0.6044 | 55 | 0.0273 |
| 0.6154 | 56 | 0.0374 |
| 0.6264 | 57 | 0.0292 |
| 0.6374 | 58 | 0.1195 |
| 0.6484 | 59 | 0.0852 |
| 0.6593 | 60 | 0.0697 |
| 0.6703 | 61 | 0.0653 |
| 0.6813 | 62 | 0.0426 |
| 0.6923 | 63 | 0.0288 |
| 0.7033 | 64 | 0.0344 |
| 0.7143 | 65 | 0.104 |
| 0.7253 | 66 | 0.0251 |
| 0.7363 | 67 | 0.0095 |
| 0.7473 | 68 | 0.0208 |
| 0.7582 | 69 | 0.0814 |
| 0.7692 | 70 | 0.0813 |
| 0.7802 | 71 | 0.0508 |
| 0.7912 | 72 | 0.032 |
| 0.8022 | 73 | 0.0879 |
| 0.8132 | 74 | 0.095 |
| 0.8242 | 75 | 0.0932 |
| 0.8352 | 76 | 0.0868 |
| 0.8462 | 77 | 0.0231 |
| 0.8571 | 78 | 0.0144 |
| 0.8681 | 79 | 0.0179 |
| 0.8791 | 80 | 0.0457 |
| 0.8901 | 81 | 0.0935 |
| 0.9011 | 82 | 0.0658 |
| 0.9121 | 83 | 0.0553 |
| 0.9231 | 84 | 0.003 |
| 0.9341 | 85 | 0.0036 |
| 0.9451 | 86 | 0.0034 |
| 0.9560 | 87 | 0.0032 |
| 0.9670 | 88 | 0.0026 |
| 0.9780 | 89 | 0.0042 |
| 0.9890 | 90 | 0.0024 |
| 1.0 | 91 | 0.0022 |
### Framework Versions
- Python: 3.9.19
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.5.0
- Accelerate: 1.0.1
- Datasets: 2.19.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
BioNLP
|
# SentenceTransformer based on allenai/specter2_base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [allenai/specter2_base](https://huggingface.co/allenai/specter2_base) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [allenai/specter2_base](https://huggingface.co/allenai/specter2_base) <!-- at revision 3447645e1def9117997203454fa4495937bfbd83 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Kawasaki disease immunoprophylaxis',
'[Effect of immunoglobulin in the prevention of coronary artery aneurysms in Kawasaki disease]. ',
'Management of Kawasaki disease. ',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 8,705 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 7.6 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 19.26 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 11.72 tokens</li><li>max: 46 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------|
| <code>Telehealth challenges</code> | <code>[Technological transformations and evolution of the medical practice: current status, issues and perspectives for the development of telemedicine]. </code> | <code>The untapped potential of Telehealth. </code> |
| <code>Racial disparities in mental health treatment</code> | <code>Relationships between stigma, depression, and treatment in white and African American primary care patients. </code> | <code>Mental Health Care Disparities Now and in the Future. </code> |
| <code>Iatrogenic hyperkalemia in elderly patients with cardiovascular disease</code> | <code>Iatrogenic hyperkalemia as a serious problem in therapy of cardiovascular diseases in elderly patients. </code> | <code>The cardiovascular implications of hypokalemia. </code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `lr_scheduler_type`: cosine_with_restarts
- `warmup_ratio`: 0.1
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: cosine_with_restarts
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0110 | 1 | 2.9861 |
| 0.0220 | 2 | 2.9379 |
| 0.0330 | 3 | 3.0613 |
| 0.0440 | 4 | 2.8081 |
| 0.0549 | 5 | 2.6516 |
| 0.0659 | 6 | 2.3688 |
| 0.0769 | 7 | 2.0502 |
| 0.0879 | 8 | 1.7557 |
| 0.0989 | 9 | 1.5316 |
| 0.1099 | 10 | 1.2476 |
| 0.1209 | 11 | 1.1529 |
| 0.1319 | 12 | 0.9483 |
| 0.1429 | 13 | 0.7187 |
| 0.1538 | 14 | 0.6824 |
| 0.1648 | 15 | 0.593 |
| 0.1758 | 16 | 0.4593 |
| 0.1868 | 17 | 0.3737 |
| 0.1978 | 18 | 0.5082 |
| 0.2088 | 19 | 0.4232 |
| 0.2198 | 20 | 0.3089 |
| 0.2308 | 21 | 0.2057 |
| 0.2418 | 22 | 0.2358 |
| 0.2527 | 23 | 0.2291 |
| 0.2637 | 24 | 0.2707 |
| 0.2747 | 25 | 0.1359 |
| 0.2857 | 26 | 0.2294 |
| 0.2967 | 27 | 0.157 |
| 0.3077 | 28 | 0.0678 |
| 0.3187 | 29 | 0.1022 |
| 0.3297 | 30 | 0.0713 |
| 0.3407 | 31 | 0.0899 |
| 0.3516 | 32 | 0.1385 |
| 0.3626 | 33 | 0.0809 |
| 0.3736 | 34 | 0.1053 |
| 0.3846 | 35 | 0.0925 |
| 0.3956 | 36 | 0.0675 |
| 0.4066 | 37 | 0.0841 |
| 0.4176 | 38 | 0.0366 |
| 0.4286 | 39 | 0.0768 |
| 0.4396 | 40 | 0.0529 |
| 0.4505 | 41 | 0.0516 |
| 0.4615 | 42 | 0.0342 |
| 0.4725 | 43 | 0.0456 |
| 0.4835 | 44 | 0.0344 |
| 0.4945 | 45 | 0.1337 |
| 0.5055 | 46 | 0.0883 |
| 0.5165 | 47 | 0.0691 |
| 0.5275 | 48 | 0.0322 |
| 0.5385 | 49 | 0.0731 |
| 0.5495 | 50 | 0.0376 |
| 0.5604 | 51 | 0.0464 |
| 0.5714 | 52 | 0.0173 |
| 0.5824 | 53 | 0.0516 |
| 0.5934 | 54 | 0.0703 |
| 0.6044 | 55 | 0.0273 |
| 0.6154 | 56 | 0.0374 |
| 0.6264 | 57 | 0.0292 |
| 0.6374 | 58 | 0.1195 |
| 0.6484 | 59 | 0.0852 |
| 0.6593 | 60 | 0.0697 |
| 0.6703 | 61 | 0.0653 |
| 0.6813 | 62 | 0.0426 |
| 0.6923 | 63 | 0.0288 |
| 0.7033 | 64 | 0.0344 |
| 0.7143 | 65 | 0.104 |
| 0.7253 | 66 | 0.0251 |
| 0.7363 | 67 | 0.0095 |
| 0.7473 | 68 | 0.0208 |
| 0.7582 | 69 | 0.0814 |
| 0.7692 | 70 | 0.0813 |
| 0.7802 | 71 | 0.0508 |
| 0.7912 | 72 | 0.032 |
| 0.8022 | 73 | 0.0879 |
| 0.8132 | 74 | 0.095 |
| 0.8242 | 75 | 0.0932 |
| 0.8352 | 76 | 0.0868 |
| 0.8462 | 77 | 0.0231 |
| 0.8571 | 78 | 0.0144 |
| 0.8681 | 79 | 0.0179 |
| 0.8791 | 80 | 0.0457 |
| 0.8901 | 81 | 0.0935 |
| 0.9011 | 82 | 0.0658 |
| 0.9121 | 83 | 0.0553 |
| 0.9231 | 84 | 0.003 |
| 0.9341 | 85 | 0.0036 |
| 0.9451 | 86 | 0.0034 |
| 0.9560 | 87 | 0.0032 |
| 0.9670 | 88 | 0.0026 |
| 0.9780 | 89 | 0.0042 |
| 0.9890 | 90 | 0.0024 |
| 1.0 | 91 | 0.0022 |
### Framework Versions
- Python: 3.9.19
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.5.0
- Accelerate: 1.0.1
- Datasets: 2.19.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "allenai/specter2_base", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:8705", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "Vaccine Administration in High-Risk Groups", "sentences": ["[V+: strategies improving vaccination coverage among children with chronic diseases]. ", "Medical writer welcomes advice on working with medical writers. ", "Vaccination management. "]}, {"source_sentence": "Eosinophil recruitment and STAT6 signalling pathway in nematode infections", "sentences": ["The roles of eotaxin and the STAT6 signalling pathway in eosinophil recruitment and host resistance to the nematodes Nippostrongylus brasiliensis and Heligmosomoides bakeri. ", "ABO blood groups from Palamau, Bihar, India. ", "Both stat5a and stat5b are required for antigen-induced eosinophil and T-cell recruitment into the tissue. "]}, {"source_sentence": "Constitutional Medicine Status", "sentences": ["[Present status of constitutional medicine]. ", "Convergence of submodality-specific input onto neurons in primary somatosensory cortex. ", "The link between health and wellbeing and constitutional recognition. "]}, {"source_sentence": "Telehealth challenges", "sentences": ["[Technological transformations and evolution of the medical practice: current status, issues and perspectives for the development of telemedicine]. ", "The untapped potential of Telehealth. ", "Enhanced chartreusin solubility by hydroxybenzoate hydrotropy. "]}, {"source_sentence": "Kawasaki disease immunoprophylaxis", "sentences": ["[Effect of immunoglobulin in the prevention of coronary artery aneurysms in Kawasaki disease]. ", "Management of Kawasaki disease. ", "IgA anti-epidermal transglutaminase antibodies in dermatitis herpetiformis and pediatric celiac disease. "]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,450 |
pensuke/distilbert-base-uncased-finetuned-clinc
|
pensuke
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-10-13T10:37:24Z |
2023-10-14T02:56:03+00:00
| 80 | 0 |
---
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- type: accuracy
value: 0.9161290322580645
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7754
- Accuracy: 0.9161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2831 | 0.7397 |
| 3.7864 | 2.0 | 636 | 1.8731 | 0.8345 |
| 3.7864 | 3.0 | 954 | 1.1580 | 0.89 |
| 1.6939 | 4.0 | 1272 | 0.8584 | 0.9077 |
| 0.9053 | 5.0 | 1590 | 0.7754 | 0.9161 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.14.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7754
- Accuracy: 0.9161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2831 | 0.7397 |
| 3.7864 | 2.0 | 636 | 1.8731 | 0.8345 |
| 3.7864 | 3.0 | 954 | 1.1580 | 0.89 |
| 1.6939 | 4.0 | 1272 | 0.8584 | 0.9077 |
| 0.9053 | 5.0 | 1590 | 0.7754 | 0.9161 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.14.1
|
{"datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9161290322580645, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,451 |
martimfasantos/gemma-2-2b-it-MT-SimPO
|
martimfasantos
|
translation
|
[
"safetensors",
"gemma2",
"xcomet_xl_xxl",
"generated_from_trainer",
"translation",
"dataset:sardinelab/MT-pref",
"base_model:martimfasantos/gemma-2-2b-it-MT-SFT",
"base_model:finetune:martimfasantos/gemma-2-2b-it-MT-SFT",
"license:gemma",
"region:us"
] | 2024-10-02T20:39:15Z |
2024-12-30T01:31:00+00:00
| 12 | 0 |
---
base_model: martimfasantos/gemma-2-2b-it-MT-SFT
datasets:
- sardinelab/MT-pref
license: gemma
pipeline_tag: translation
tags:
- xcomet_xl_xxl
- generated_from_trainer
model-index:
- name: gemma-2-2b-it-MT-SimPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-2-2b-it-MT-SimPO
This model is a fine-tuned version of [martimfasantos/gemma-2-2b-it-MT-SFT](https://huggingface.co/martimfasantos/gemma-2-2b-it-MT-SFT) on the sardinelab/MT-pref dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.43.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-2-2b-it-MT-SimPO
This model is a fine-tuned version of [martimfasantos/gemma-2-2b-it-MT-SFT](https://huggingface.co/martimfasantos/gemma-2-2b-it-MT-SFT) on the sardinelab/MT-pref dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.43.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
{"base_model": "martimfasantos/gemma-2-2b-it-MT-SFT", "datasets": ["sardinelab/MT-pref"], "license": "gemma", "pipeline_tag": "translation", "tags": ["xcomet_xl_xxl", "generated_from_trainer"], "model-index": [{"name": "gemma-2-2b-it-MT-SimPO", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 42,452 |
Babelscape/cner-base
|
Babelscape
|
token-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"token-classification",
"named-entity-recognition",
"sequence-tagger-model",
"en",
"dataset:Babelscape/cner",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-04-10T07:55:44Z |
2024-06-17T22:22:17+00:00
| 1,309 | 4 |
---
datasets:
- Babelscape/cner
language:
- en
tags:
- named-entity-recognition
- sequence-tagger-model
annotations_creators:
- machine-generated
language_creators:
- machine-generated
widget:
- text: George Washington went to Washington.
- text: What is the seventh tallest mountain in North America?
pretty_name: cner-model
source_datasets:
- original
task_categories:
- structure-prediction
task_ids:
- named-entity-recognition
---
# CNER: Concept and Named Entity Recognition
This is the model card for the NAACL 2024 paper [CNER: Concept and Named Entity Recognition](https://aclanthology.org/2024.naacl-long.461/).
We fine-tuned a language model (DeBERTa-v3-base) for 1 epoch on our [CNER dataset](https://huggingface.co/datasets/Babelscape/cner) using the default hyperparameters, optimizer and architecture of Hugging Face, therefore the results of this model may differ from the ones presented in the paper.
The resulting CNER model is able to jointly identifying and classifying concepts and named entities with fine-grained tags.
**If you use the model, please reference this work in your paper**:
```bibtex
@inproceedings{martinelli-etal-2024-cner,
title = "{CNER}: Concept and Named Entity Recognition",
author = "Martinelli, Giuliano and
Molfese, Francesco and
Tedeschi, Simone and
Fern{\'a}ndez-Castro, Alberte and
Navigli, Roberto",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.461",
pages = "8329--8344",
}
```
The original repository for the paper can be found at [https://github.com/Babelscape/cner](https://github.com/Babelscape/cner).
## How to use
You can use this model with Transformers NER *pipeline*.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Babelscape/cner-model")
model = AutoModelForTokenClassification.from_pretrained("Babelscape/cner-model")
nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True)
example = "What is the seventh tallest mountain in North America?"
ner_results = nlp(example)
print(ner_results)
```
## Classes
<img src="https://hf.fast360.xyz/production/uploads/65e9ccd84ce78d665a50f78b/2K3NZ79go3Zjf3qFeHO0O.png" alt="drawing" />
## Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents and models belongs to the original copyright holders.
`microsoft/deberta-v3-base` is released under the [MIT license](https://choosealicense.com/licenses/mit/).
| null |
Non_BioNLP
|
# CNER: Concept and Named Entity Recognition
This is the model card for the NAACL 2024 paper [CNER: Concept and Named Entity Recognition](https://aclanthology.org/2024.naacl-long.461/).
We fine-tuned a language model (DeBERTa-v3-base) for 1 epoch on our [CNER dataset](https://huggingface.co/datasets/Babelscape/cner) using the default hyperparameters, optimizer and architecture of Hugging Face, therefore the results of this model may differ from the ones presented in the paper.
The resulting CNER model is able to jointly identifying and classifying concepts and named entities with fine-grained tags.
**If you use the model, please reference this work in your paper**:
```bibtex
@inproceedings{martinelli-etal-2024-cner,
title = "{CNER}: Concept and Named Entity Recognition",
author = "Martinelli, Giuliano and
Molfese, Francesco and
Tedeschi, Simone and
Fern{\'a}ndez-Castro, Alberte and
Navigli, Roberto",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.461",
pages = "8329--8344",
}
```
The original repository for the paper can be found at [https://github.com/Babelscape/cner](https://github.com/Babelscape/cner).
## How to use
You can use this model with Transformers NER *pipeline*.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Babelscape/cner-model")
model = AutoModelForTokenClassification.from_pretrained("Babelscape/cner-model")
nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True)
example = "What is the seventh tallest mountain in North America?"
ner_results = nlp(example)
print(ner_results)
```
## Classes
<img src="https://hf.fast360.xyz/production/uploads/65e9ccd84ce78d665a50f78b/2K3NZ79go3Zjf3qFeHO0O.png" alt="drawing" />
## Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents and models belongs to the original copyright holders.
`microsoft/deberta-v3-base` is released under the [MIT license](https://choosealicense.com/licenses/mit/).
|
{"datasets": ["Babelscape/cner"], "language": ["en"], "tags": ["named-entity-recognition", "sequence-tagger-model"], "annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "widget": [{"text": "George Washington went to Washington."}, {"text": "What is the seventh tallest mountain in North America?"}], "pretty_name": "cner-model", "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": ["named-entity-recognition"]}
|
task
|
[
"NAMED_ENTITY_RECOGNITION"
] | 42,453 |
snowian/emotion
|
snowian
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-07-02T08:08:39Z |
2024-07-02T08:09:18+00:00
| 90 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.9285
name: Accuracy
- type: f1
value: 0.92867427809199
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1702
- Accuracy: 0.9285
- F1: 0.9287
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8067 | 1.0 | 250 | 0.2883 | 0.9115 | 0.9115 |
| 0.2204 | 2.0 | 500 | 0.1883 | 0.9295 | 0.9299 |
| 0.1495 | 3.0 | 750 | 0.1702 | 0.9285 | 0.9287 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1702
- Accuracy: 0.9285
- F1: 0.9287
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8067 | 1.0 | 250 | 0.2883 | 0.9115 | 0.9115 |
| 0.2204 | 2.0 | 500 | 0.1883 | 0.9295 | 0.9299 |
| 0.1495 | 3.0 | 750 | 0.1702 | 0.9285 | 0.9287 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9285, "name": "Accuracy"}, {"type": "f1", "value": 0.92867427809199, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,454 |
Helsinki-NLP/opus-mt-uk-he
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"he",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2023-08-16T12:08:08+00:00
| 24 | 0 |
---
language:
- uk
- he
license: apache-2.0
tags:
- translation
---
### ukr-heb
* source group: Ukrainian
* target group: Hebrew
* OPUS readme: [ukr-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-heb/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): heb
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.heb | 35.7 | 0.557 |
### System Info:
- hf_name: ukr-heb
- source_languages: ukr
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'he']
- src_constituents: {'ukr'}
- tgt_constituents: {'heb'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: heb
- short_pair: uk-he
- chrF2_score: 0.557
- bleu: 35.7
- brevity_penalty: 1.0
- ref_len: 4765.0
- src_name: Ukrainian
- tgt_name: Hebrew
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: he
- prefer_old: False
- long_pair: ukr-heb
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| null |
Non_BioNLP
|
### ukr-heb
* source group: Ukrainian
* target group: Hebrew
* OPUS readme: [ukr-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-heb/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): heb
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.heb | 35.7 | 0.557 |
### System Info:
- hf_name: ukr-heb
- source_languages: ukr
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'he']
- src_constituents: {'ukr'}
- tgt_constituents: {'heb'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: heb
- short_pair: uk-he
- chrF2_score: 0.557
- bleu: 35.7
- brevity_penalty: 1.0
- ref_len: 4765.0
- src_name: Ukrainian
- tgt_name: Hebrew
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: he
- prefer_old: False
- long_pair: ukr-heb
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
{"language": ["uk", "he"], "license": "apache-2.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 42,455 |
fernandoruiz/madlad400-3b-mt-Q4_K_S-GGUF
|
fernandoruiz
|
translation
|
[
"transformers",
"gguf",
"text2text-generation",
"text-generation-inference",
"llama-cpp",
"gguf-my-repo",
"translation",
"multilingual",
"en",
"ru",
"es",
"fr",
"de",
"it",
"pt",
"pl",
"nl",
"vi",
"tr",
"sv",
"id",
"ro",
"cs",
"zh",
"hu",
"ja",
"th",
"fi",
"fa",
"uk",
"da",
"el",
"no",
"bg",
"sk",
"ko",
"ar",
"lt",
"ca",
"sl",
"he",
"et",
"lv",
"hi",
"sq",
"ms",
"az",
"sr",
"ta",
"hr",
"kk",
"is",
"ml",
"mr",
"te",
"af",
"gl",
"fil",
"be",
"mk",
"eu",
"bn",
"ka",
"mn",
"bs",
"uz",
"ur",
"sw",
"yue",
"ne",
"kn",
"kaa",
"gu",
"si",
"cy",
"eo",
"la",
"hy",
"ky",
"tg",
"ga",
"mt",
"my",
"km",
"tt",
"so",
"ku",
"ps",
"pa",
"rw",
"lo",
"ha",
"dv",
"fy",
"lb",
"ckb",
"mg",
"gd",
"am",
"ug",
"ht",
"grc",
"hmn",
"sd",
"jv",
"mi",
"tk",
"ceb",
"yi",
"ba",
"fo",
"or",
"xh",
"su",
"kl",
"ny",
"sm",
"sn",
"co",
"zu",
"ig",
"yo",
"pap",
"st",
"haw",
"as",
"oc",
"cv",
"lus",
"tet",
"gsw",
"sah",
"br",
"rm",
"sa",
"bo",
"om",
"se",
"ce",
"cnh",
"ilo",
"hil",
"udm",
"os",
"lg",
"ti",
"vec",
"ts",
"tyv",
"kbd",
"ee",
"iba",
"av",
"kha",
"to",
"tn",
"nso",
"fj",
"zza",
"ak",
"ada",
"otq",
"dz",
"bua",
"cfm",
"ln",
"chm",
"gn",
"krc",
"wa",
"hif",
"yua",
"srn",
"war",
"rom",
"bik",
"pam",
"sg",
"lu",
"ady",
"kbp",
"syr",
"ltg",
"myv",
"iso",
"kac",
"bho",
"ay",
"kum",
"qu",
"za",
"pag",
"ngu",
"ve",
"pck",
"zap",
"tyz",
"hui",
"bbc",
"tzo",
"tiv",
"ksd",
"gom",
"min",
"ang",
"nhe",
"bgp",
"nzi",
"nnb",
"nv",
"zxx",
"bci",
"kv",
"new",
"mps",
"alt",
"meu",
"bew",
"fon",
"iu",
"abt",
"mgh",
"mnw",
"tvl",
"dov",
"tlh",
"ho",
"kw",
"mrj",
"meo",
"crh",
"mbt",
"emp",
"ace",
"ium",
"mam",
"gym",
"mai",
"crs",
"pon",
"ubu",
"fip",
"quc",
"gv",
"kj",
"btx",
"ape",
"chk",
"rcf",
"shn",
"tzh",
"mdf",
"ppk",
"ss",
"gag",
"cab",
"kri",
"seh",
"ibb",
"tbz",
"bru",
"enq",
"ach",
"cuk",
"kmb",
"wo",
"kek",
"qub",
"tab",
"bts",
"kos",
"rwo",
"cak",
"tuc",
"bum",
"cjk",
"gil",
"stq",
"tsg",
"quh",
"mak",
"arn",
"ban",
"jiv",
"sja",
"yap",
"tcy",
"toj",
"twu",
"xal",
"amu",
"rmc",
"hus",
"nia",
"kjh",
"bm",
"guh",
"mas",
"acf",
"dtp",
"ksw",
"bzj",
"din",
"zne",
"mad",
"msi",
"mag",
"mkn",
"kg",
"lhu",
"ch",
"qvi",
"mh",
"djk",
"sus",
"mfe",
"srm",
"dyu",
"ctu",
"gui",
"pau",
"inb",
"bi",
"mni",
"guc",
"jam",
"wal",
"jac",
"bas",
"gor",
"skr",
"nyu",
"noa",
"sda",
"gub",
"nog",
"cni",
"teo",
"tdx",
"sxn",
"rki",
"nr",
"frp",
"alz",
"taj",
"lrc",
"cce",
"rn",
"jvn",
"hvn",
"nij",
"dwr",
"izz",
"msm",
"bus",
"ktu",
"chr",
"maz",
"tzj",
"suz",
"knj",
"bim",
"gvl",
"bqc",
"tca",
"pis",
"prk",
"laj",
"mel",
"qxr",
"niq",
"ahk",
"shp",
"hne",
"spp",
"koi",
"krj",
"quf",
"luz",
"agr",
"tsc",
"mqy",
"gof",
"gbm",
"miq",
"dje",
"awa",
"bjj",
"qvz",
"sjp",
"tll",
"raj",
"kjg",
"bgz",
"quy",
"cbk",
"akb",
"oj",
"ify",
"mey",
"ks",
"cac",
"brx",
"qup",
"syl",
"jax",
"ff",
"ber",
"tks",
"trp",
"mrw",
"adh",
"smt",
"srr",
"ffm",
"qvc",
"mtr",
"ann",
"aa",
"noe",
"nut",
"gyn",
"kwi",
"xmm",
"msb",
"dataset:allenai/MADLAD-400",
"base_model:google/madlad400-3b-mt",
"base_model:quantized:google/madlad400-3b-mt",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-13T20:33:07Z |
2024-07-13T20:33:16+00:00
| 6 | 0 |
---
base_model: google/madlad400-3b-mt
datasets:
- allenai/MADLAD-400
language:
- multilingual
- en
- ru
- es
- fr
- de
- it
- pt
- pl
- nl
- vi
- tr
- sv
- id
- ro
- cs
- zh
- hu
- ja
- th
- fi
- fa
- uk
- da
- el
- 'no'
- bg
- sk
- ko
- ar
- lt
- ca
- sl
- he
- et
- lv
- hi
- sq
- ms
- az
- sr
- ta
- hr
- kk
- is
- ml
- mr
- te
- af
- gl
- fil
- be
- mk
- eu
- bn
- ka
- mn
- bs
- uz
- ur
- sw
- yue
- ne
- kn
- kaa
- gu
- si
- cy
- eo
- la
- hy
- ky
- tg
- ga
- mt
- my
- km
- tt
- so
- ku
- ps
- pa
- rw
- lo
- ha
- dv
- fy
- lb
- ckb
- mg
- gd
- am
- ug
- ht
- grc
- hmn
- sd
- jv
- mi
- tk
- ceb
- yi
- ba
- fo
- or
- xh
- su
- kl
- ny
- sm
- sn
- co
- zu
- ig
- yo
- pap
- st
- haw
- as
- oc
- cv
- lus
- tet
- gsw
- sah
- br
- rm
- sa
- bo
- om
- se
- ce
- cnh
- ilo
- hil
- udm
- os
- lg
- ti
- vec
- ts
- tyv
- kbd
- ee
- iba
- av
- kha
- to
- tn
- nso
- fj
- zza
- ak
- ada
- otq
- dz
- bua
- cfm
- ln
- chm
- gn
- krc
- wa
- hif
- yua
- srn
- war
- rom
- bik
- pam
- sg
- lu
- ady
- kbp
- syr
- ltg
- myv
- iso
- kac
- bho
- ay
- kum
- qu
- za
- pag
- ngu
- ve
- pck
- zap
- tyz
- hui
- bbc
- tzo
- tiv
- ksd
- gom
- min
- ang
- nhe
- bgp
- nzi
- nnb
- nv
- zxx
- bci
- kv
- new
- mps
- alt
- meu
- bew
- fon
- iu
- abt
- mgh
- mnw
- tvl
- dov
- tlh
- ho
- kw
- mrj
- meo
- crh
- mbt
- emp
- ace
- ium
- mam
- gym
- mai
- crs
- pon
- ubu
- fip
- quc
- gv
- kj
- btx
- ape
- chk
- rcf
- shn
- tzh
- mdf
- ppk
- ss
- gag
- cab
- kri
- seh
- ibb
- tbz
- bru
- enq
- ach
- cuk
- kmb
- wo
- kek
- qub
- tab
- bts
- kos
- rwo
- cak
- tuc
- bum
- cjk
- gil
- stq
- tsg
- quh
- mak
- arn
- ban
- jiv
- sja
- yap
- tcy
- toj
- twu
- xal
- amu
- rmc
- hus
- nia
- kjh
- bm
- guh
- mas
- acf
- dtp
- ksw
- bzj
- din
- zne
- mad
- msi
- mag
- mkn
- kg
- lhu
- ch
- qvi
- mh
- djk
- sus
- mfe
- srm
- dyu
- ctu
- gui
- pau
- inb
- bi
- mni
- guc
- jam
- wal
- jac
- bas
- gor
- skr
- nyu
- noa
- sda
- gub
- nog
- cni
- teo
- tdx
- sxn
- rki
- nr
- frp
- alz
- taj
- lrc
- cce
- rn
- jvn
- hvn
- nij
- dwr
- izz
- msm
- bus
- ktu
- chr
- maz
- tzj
- suz
- knj
- bim
- gvl
- bqc
- tca
- pis
- prk
- laj
- mel
- qxr
- niq
- ahk
- shp
- hne
- spp
- koi
- krj
- quf
- luz
- agr
- tsc
- mqy
- gof
- gbm
- miq
- dje
- awa
- bjj
- qvz
- sjp
- tll
- raj
- kjg
- bgz
- quy
- cbk
- akb
- oj
- ify
- mey
- ks
- cac
- brx
- qup
- syl
- jax
- ff
- ber
- tks
- trp
- mrw
- adh
- smt
- srr
- ffm
- qvc
- mtr
- ann
- kaa
- aa
- noe
- nut
- gyn
- kwi
- xmm
- msb
library_name: transformers
license: apache-2.0
pipeline_tag: translation
tags:
- text2text-generation
- text-generation-inference
- llama-cpp
- gguf-my-repo
widget:
- text: <2en> Como vai, amigo?
example_title: Translation to English
- text: <2de> Do you speak German?
example_title: Translation to German
---
# fernandoruiz/madlad400-3b-mt-Q4_K_S-GGUF
This model was converted to GGUF format from [`google/madlad400-3b-mt`](https://huggingface.co/google/madlad400-3b-mt) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/madlad400-3b-mt) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo fernandoruiz/madlad400-3b-mt-Q4_K_S-GGUF --hf-file madlad400-3b-mt-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo fernandoruiz/madlad400-3b-mt-Q4_K_S-GGUF --hf-file madlad400-3b-mt-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo fernandoruiz/madlad400-3b-mt-Q4_K_S-GGUF --hf-file madlad400-3b-mt-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo fernandoruiz/madlad400-3b-mt-Q4_K_S-GGUF --hf-file madlad400-3b-mt-q4_k_s.gguf -c 2048
```
| null |
Non_BioNLP
|
# fernandoruiz/madlad400-3b-mt-Q4_K_S-GGUF
This model was converted to GGUF format from [`google/madlad400-3b-mt`](https://huggingface.co/google/madlad400-3b-mt) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/madlad400-3b-mt) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo fernandoruiz/madlad400-3b-mt-Q4_K_S-GGUF --hf-file madlad400-3b-mt-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo fernandoruiz/madlad400-3b-mt-Q4_K_S-GGUF --hf-file madlad400-3b-mt-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo fernandoruiz/madlad400-3b-mt-Q4_K_S-GGUF --hf-file madlad400-3b-mt-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo fernandoruiz/madlad400-3b-mt-Q4_K_S-GGUF --hf-file madlad400-3b-mt-q4_k_s.gguf -c 2048
```
|
{"base_model": "google/madlad400-3b-mt", "datasets": ["allenai/MADLAD-400"], "language": ["multilingual", "en", "ru", "es", "fr", "de", "it", "pt", "pl", "nl", "vi", "tr", "sv", "id", "ro", "cs", "zh", "hu", "ja", "th", "fi", "fa", "uk", "da", "el", "no", "bg", "sk", "ko", "ar", "lt", "ca", "sl", "he", "et", "lv", "hi", "sq", "ms", "az", "sr", "ta", "hr", "kk", "is", "ml", "mr", "te", "af", "gl", "fil", "be", "mk", "eu", "bn", "ka", "mn", "bs", "uz", "ur", "sw", "yue", "ne", "kn", "kaa", "gu", "si", "cy", "eo", "la", "hy", "ky", "tg", "ga", "mt", "my", "km", "tt", "so", "ku", "ps", "pa", "rw", "lo", "ha", "dv", "fy", "lb", "ckb", "mg", "gd", "am", "ug", "ht", "grc", "hmn", "sd", "jv", "mi", "tk", "ceb", "yi", "ba", "fo", "or", "xh", "su", "kl", "ny", "sm", "sn", "co", "zu", "ig", "yo", "pap", "st", "haw", "as", "oc", "cv", "lus", "tet", "gsw", "sah", "br", "rm", "sa", "bo", "om", "se", "ce", "cnh", "ilo", "hil", "udm", "os", "lg", "ti", "vec", "ts", "tyv", "kbd", "ee", "iba", "av", "kha", "to", "tn", "nso", "fj", "zza", "ak", "ada", "otq", "dz", "bua", "cfm", "ln", "chm", "gn", "krc", "wa", "hif", "yua", "srn", "war", "rom", "bik", "pam", "sg", "lu", "ady", "kbp", "syr", "ltg", "myv", "iso", "kac", "bho", "ay", "kum", "qu", "za", "pag", "ngu", "ve", "pck", "zap", "tyz", "hui", "bbc", "tzo", "tiv", "ksd", "gom", "min", "ang", "nhe", "bgp", "nzi", "nnb", "nv", "zxx", "bci", "kv", "new", "mps", "alt", "meu", "bew", "fon", "iu", "abt", "mgh", "mnw", "tvl", "dov", "tlh", "ho", "kw", "mrj", "meo", "crh", "mbt", "emp", "ace", "ium", "mam", "gym", "mai", "crs", "pon", "ubu", "fip", "quc", "gv", "kj", "btx", "ape", "chk", "rcf", "shn", "tzh", "mdf", "ppk", "ss", "gag", "cab", "kri", "seh", "ibb", "tbz", "bru", "enq", "ach", "cuk", "kmb", "wo", "kek", "qub", "tab", "bts", "kos", "rwo", "cak", "tuc", "bum", "cjk", "gil", "stq", "tsg", "quh", "mak", "arn", "ban", "jiv", "sja", "yap", "tcy", "toj", "twu", "xal", "amu", "rmc", "hus", "nia", "kjh", "bm", "guh", "mas", "acf", "dtp", "ksw", "bzj", "din", "zne", "mad", "msi", "mag", "mkn", "kg", "lhu", "ch", "qvi", "mh", "djk", "sus", "mfe", "srm", "dyu", "ctu", "gui", "pau", "inb", "bi", "mni", "guc", "jam", "wal", "jac", "bas", "gor", "skr", "nyu", "noa", "sda", "gub", "nog", "cni", "teo", "tdx", "sxn", "rki", "nr", "frp", "alz", "taj", "lrc", "cce", "rn", "jvn", "hvn", "nij", "dwr", "izz", "msm", "bus", "ktu", "chr", "maz", "tzj", "suz", "knj", "bim", "gvl", "bqc", "tca", "pis", "prk", "laj", "mel", "qxr", "niq", "ahk", "shp", "hne", "spp", "koi", "krj", "quf", "luz", "agr", "tsc", "mqy", "gof", "gbm", "miq", "dje", "awa", "bjj", "qvz", "sjp", "tll", "raj", "kjg", "bgz", "quy", "cbk", "akb", "oj", "ify", "mey", "ks", "cac", "brx", "qup", "syl", "jax", "ff", "ber", "tks", "trp", "mrw", "adh", "smt", "srr", "ffm", "qvc", "mtr", "ann", "kaa", "aa", "noe", "nut", "gyn", "kwi", "xmm", "msb"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "translation", "tags": ["text2text-generation", "text-generation-inference", "llama-cpp", "gguf-my-repo"], "widget": [{"text": "<2en> Como vai, amigo?", "example_title": "Translation to English"}, {"text": "<2de> Do you speak German?", "example_title": "Translation to German"}]}
|
task
|
[
"TRANSLATION"
] | 42,456 |
shabieh2/Llama-3.2-1B-Mental-Health-Sentiment
|
shabieh2
|
text-generation
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"facebook",
"meta",
"llama-3",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"arxiv:2204.05149",
"arxiv:2405.16406",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2025-03-13T02:10:35Z |
2025-03-13T02:12:40+00:00
| 37 | 0 |
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
license: llama3.2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
\ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\
\ for use, reproduction, distribution and modification of the Llama Materials set\
\ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\
\ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\n“Licensee” or “you” means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf),\
\ of the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\
\ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\
\ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\
\ below or by using or distributing any portion or element of the Llama Materials,\
\ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\
\ copy, create derivative works of, and make modifications to the Llama Materials.\
\ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\
\ Materials (or any derivative works thereof), or a product or service (including\
\ another AI model) that contains any of them, you shall (A) provide a copy of this\
\ Agreement with any such Llama Materials; and (B) prominently display “Built with\
\ Llama” on a related website, user interface, blogpost, about page, or product\
\ documentation. If you use the Llama Materials or any outputs or results of the\
\ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\
\ which is distributed or made available, you shall also include “Llama” at the\
\ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\
\ derivative works thereof, from a Licensee as part of an integrated end user product,\
\ then Section 2 of this Agreement will not apply to you. \niii. You must retain\
\ in all copies of the Llama Materials that you distribute the following attribution\
\ notice within a “Notice” text file distributed as a part of such copies: “Llama\
\ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\
\ version release date, the monthly active users of the products or services made\
\ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\
\ monthly active users in the preceding calendar month, you must request a license\
\ from Meta, which Meta may grant to you in its sole discretion, and you are not\
\ authorized to exercise any of the rights under this Agreement unless or until\
\ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\
\ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\
\ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\
\ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\
\ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\
\ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\
\ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\
\ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\
\ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\
\ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\
\ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\
\ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\
\ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\
a. No trademark licenses are granted under this Agreement, and in connection with\
\ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\
\ by or associated with the other or any of its affiliates, except as required\
\ for reasonable and customary use in describing and redistributing the Llama Materials\
\ or as set forth in this Section 5(a). Meta hereby grants you a license to use\
\ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\
\ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\
\ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\
\ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\
\ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\
\ respect to any derivative works and modifications of the Llama Materials that\
\ are made by you, as between you and Meta, you are and will be the owner of such\
\ derivative works and modifications.\nc. If you institute litigation or other proceedings\
\ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\
\ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\
\ of any of the foregoing, constitutes infringement of intellectual property or\
\ other rights owned or licensable by you, then any licenses granted to you under\
\ this Agreement shall terminate as of the date such litigation or claim is filed\
\ or instituted. You will indemnify and hold harmless Meta from and against any\
\ claim by any third party arising out of or related to your use or distribution\
\ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\
\ commence upon your acceptance of this Agreement or access to the Llama Materials\
\ and will continue in full force and effect until terminated in accordance with\
\ the terms and conditions herein. Meta may terminate this Agreement if you are\
\ in breach of any term or condition of this Agreement. Upon termination of this\
\ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\
\ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\
\ Jurisdiction. This Agreement will be governed and construed under the laws of\
\ the State of California without regard to choice of law principles, and the UN\
\ Convention on Contracts for the International Sale of Goods does not apply to\
\ this Agreement. The courts of California shall have exclusive jurisdiction of\
\ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\
\ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 3.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\
\ information about individuals, including information about individuals’ identity,\
\ health, or demographic information, unless you have obtained the right to do so\
\ in accordance with applicable law\n 5. Engage in or facilitate any action or\
\ generate any content that infringes, misappropriates, or otherwise violates any\
\ third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 6. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n 7. Engage in any action, or\
\ facilitate any action, to intentionally circumvent or remove usage restrictions\
\ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\
\ in, promote, incite, facilitate, or assist in the planning or development of activities\
\ that present a risk of death or bodily harm to individuals, including use of Llama\
\ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\
\ applications, espionage, use for materials or activities that are subject to the\
\ International Traffic Arms Regulations (ITAR) maintained by the United States\
\ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\
\ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\
\ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\
\ substances\n 11. Operation of critical infrastructure, transportation technologies,\
\ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\
\ and eating disorders\n 13. Any content intended to incite or promote violence,\
\ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\
\ or mislead others, including use of Llama 3.2 related to the following:\n 14.\
\ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\
\ 15. Generating, promoting, or furthering defamatory content, including the\
\ creation of defamatory statements, images, or other content\n 16. Generating,\
\ promoting, or further distributing spam\n 17. Impersonating another individual\
\ without consent, authorization, or legal right\n 18. Representing that the\
\ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\
\ false online engagement, including fake reviews and other means of fake online\
\ engagement \n4. Fail to appropriately disclose to end users any known dangers\
\ of your AI system 5. Interact with third party tools, models, or software designed\
\ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\
\ that the outputs of such tools, models, or software are associated with Meta or\
\ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\
\ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\
\ are not being granted to you if you are an individual domiciled in, or a company\
\ with a principal place of business in, the European Union. This restriction does\
\ not apply to end users of a product or service that incorporates any such multimodal\
\ models.\n\nPlease report any violation of this Policy, software “bug,” or other\
\ problems that could lead to a violation of this Policy through one of the following\
\ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\
\ 3.2: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
## Model Information
The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model Developer:** Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
| Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use-cases with limited compute resources.
**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
## How to use
This repository contains two versions of Llama-3.2-1B-Instruct, for use with transformers and with the original `llama` codebase.
### Use with transformers
Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import torch
from transformers import pipeline
model_id = "meta-llama/Llama-3.2-1B-Instruct"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes)
### Use with `llama`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama)
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --include "original/*" --local-dir Llama-3.2-1B-Instruct
```
## Hardware and Software
**Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure.
**Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | ----- | :---: | :---: | :---: |
| Llama 3.2 1B | 370k | \- | 700 | 107 | 0 |
| Llama 3.2 3B | 460k | \- | 700 | 133 | 0 |
| Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 1B QLora | 1.3k | 0 | 700 | 0.381 | 0 |
| Llama 3.2 3B QLora | 1.6k | 0 | 700 | 0.461 | 0 |
| Total | 833k | 86k | | 240 | 0 |
\*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each. This is due to the minimal training GPU hours that are required.
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
## Training Data
**Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO).
**Data Freshness:** The pretraining data has a cutoff of December 2023\.
## Quantization
### Quantization Scheme
We designed the current quantization scheme with the [PyTorch’s ExecuTorch](https://github.com/pytorch/executorch) inference framework and Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts:
- All linear layers in all transformer blocks are quantized to a 4-bit groupwise scheme (with a group size of 32) for weights and 8-bit per-token dynamic quantization for activations.
- The classification layer is quantized to 8-bit per-channel for weight and 8-bit per token dynamic quantization for activation.
- Similar to classification layer, an 8-bit per channel quantization is used for embedding layer.
### Quantization-Aware Training and LoRA
The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through only post-training stages, using the same data as the full precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block. Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to QLoRA of Dettmers et al., (2023) (i.e., quantization followed by LoRA adapters), we refer this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO).
### SpinQuant
[SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with sequence-length 2048 from the WikiText 2 dataset. For GPTQ, we used 128 samples from the same dataset with the same sequence-length.
## Benchmarks \- English Text
In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.
### Base Pretrained Models
| Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| ----- | ----- | :---: | :---: | :---: | :---: | :---: |
| General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 |
| | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 |
| | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 |
| Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 |
| | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 |
| | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 |
| Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 |
### Instruction Tuned Models
| Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B bf16 | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B bf16 | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 |
| Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 |
| Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 |
| Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 |
| Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 |
| | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 |
| Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 |
| | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 |
| | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 |
| Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 |
| | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 31.5 | 30.1 | 38.5 |
| Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 |
| | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 |
| | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 |
| Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 |
\*\*for comparison purposes only. Model not released.
### Multilingual Benchmarks
| Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 |
| | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 |
| | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 |
| | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 |
| | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 |
| | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 |
| | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 |
\*\*for comparison purposes only. Model not released.
## Inference time
In the below table, we compare the performance metrics of different quantization methods (SpinQuant and QAT \+ LoRA) with the BF16 baseline. The evaluation was done using the [ExecuTorch](https://github.com/pytorch/executorch) framework as the inference engine, with the ARM CPU as a backend using Android OnePlus 12 device.
| Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) |
| :---- | ----- | ----- | ----- | ----- | ----- |
| 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 |
| 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) |
| 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) |
| 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 |
| 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) |
| 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) |
(\*) The performance measurement is done using an adb binary-based approach.
(\*\*) It is measured on an Android OnePlus 12 device.
(\*\*\*) Time-to-first-token (TTFT) is measured with prompt length=64
*Footnote:*
- *Decode (tokens/second) is for how quickly it keeps generating. Higher is better.*
- *Time-to-first-token (TTFT for shorthand) is for how fast it generates the first token for a given prompt. Lower is better.*
- *Prefill is the inverse of TTFT (aka 1/TTFT) in tokens/second. Higher is better*
- *Model size \- how big is the model, measured by, PTE file, a binary file format for ExecuTorch*
- *RSS size \- Memory usage in resident set size (RSS)*
## Responsibility & Safety
As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks:
1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama
2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm
3. Provide protections for the community to help prevent the misuse of our models
### Responsible Deployment
**Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/).
#### Llama 3.2 Instruct
**Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/).
**Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.2 Systems
**Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
### New Capabilities and Use Cases
**Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well.
**Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version.
### Evaluations
**Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case.
**Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical Risks
In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas:
**1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models.
**2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
**3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models.
### Community
**Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
**Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
| null |
Non_BioNLP
|
## Model Information
The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model Developer:** Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
| Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use-cases with limited compute resources.
**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
## How to use
This repository contains two versions of Llama-3.2-1B-Instruct, for use with transformers and with the original `llama` codebase.
### Use with transformers
Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import torch
from transformers import pipeline
model_id = "meta-llama/Llama-3.2-1B-Instruct"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes)
### Use with `llama`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama)
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --include "original/*" --local-dir Llama-3.2-1B-Instruct
```
## Hardware and Software
**Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure.
**Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | ----- | :---: | :---: | :---: |
| Llama 3.2 1B | 370k | \- | 700 | 107 | 0 |
| Llama 3.2 3B | 460k | \- | 700 | 133 | 0 |
| Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 1B QLora | 1.3k | 0 | 700 | 0.381 | 0 |
| Llama 3.2 3B QLora | 1.6k | 0 | 700 | 0.461 | 0 |
| Total | 833k | 86k | | 240 | 0 |
\*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each. This is due to the minimal training GPU hours that are required.
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
## Training Data
**Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO).
**Data Freshness:** The pretraining data has a cutoff of December 2023\.
## Quantization
### Quantization Scheme
We designed the current quantization scheme with the [PyTorch’s ExecuTorch](https://github.com/pytorch/executorch) inference framework and Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts:
- All linear layers in all transformer blocks are quantized to a 4-bit groupwise scheme (with a group size of 32) for weights and 8-bit per-token dynamic quantization for activations.
- The classification layer is quantized to 8-bit per-channel for weight and 8-bit per token dynamic quantization for activation.
- Similar to classification layer, an 8-bit per channel quantization is used for embedding layer.
### Quantization-Aware Training and LoRA
The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through only post-training stages, using the same data as the full precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block. Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to QLoRA of Dettmers et al., (2023) (i.e., quantization followed by LoRA adapters), we refer this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO).
### SpinQuant
[SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with sequence-length 2048 from the WikiText 2 dataset. For GPTQ, we used 128 samples from the same dataset with the same sequence-length.
## Benchmarks \- English Text
In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.
### Base Pretrained Models
| Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| ----- | ----- | :---: | :---: | :---: | :---: | :---: |
| General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 |
| | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 |
| | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 |
| Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 |
| | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 |
| | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 |
| Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 |
### Instruction Tuned Models
| Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B bf16 | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B bf16 | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 |
| Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 |
| Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 |
| Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 |
| Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 |
| | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 |
| Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 |
| | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 |
| | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 |
| Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 |
| | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 31.5 | 30.1 | 38.5 |
| Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 |
| | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 |
| | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 |
| Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 |
\*\*for comparison purposes only. Model not released.
### Multilingual Benchmarks
| Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 |
| | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 |
| | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 |
| | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 |
| | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 |
| | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 |
| | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 |
\*\*for comparison purposes only. Model not released.
## Inference time
In the below table, we compare the performance metrics of different quantization methods (SpinQuant and QAT \+ LoRA) with the BF16 baseline. The evaluation was done using the [ExecuTorch](https://github.com/pytorch/executorch) framework as the inference engine, with the ARM CPU as a backend using Android OnePlus 12 device.
| Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) |
| :---- | ----- | ----- | ----- | ----- | ----- |
| 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 |
| 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) |
| 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) |
| 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 |
| 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) |
| 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) |
(\*) The performance measurement is done using an adb binary-based approach.
(\*\*) It is measured on an Android OnePlus 12 device.
(\*\*\*) Time-to-first-token (TTFT) is measured with prompt length=64
*Footnote:*
- *Decode (tokens/second) is for how quickly it keeps generating. Higher is better.*
- *Time-to-first-token (TTFT for shorthand) is for how fast it generates the first token for a given prompt. Lower is better.*
- *Prefill is the inverse of TTFT (aka 1/TTFT) in tokens/second. Higher is better*
- *Model size \- how big is the model, measured by, PTE file, a binary file format for ExecuTorch*
- *RSS size \- Memory usage in resident set size (RSS)*
## Responsibility & Safety
As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks:
1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama
2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm
3. Provide protections for the community to help prevent the misuse of our models
### Responsible Deployment
**Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/).
#### Llama 3.2 Instruct
**Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/).
**Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.2 Systems
**Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
### New Capabilities and Use Cases
**Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well.
**Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version.
### Evaluations
**Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case.
**Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical Risks
In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas:
**1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models.
**2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
**3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models.
### Community
**Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
**Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
|
{"language": ["en", "de", "fr", "it", "pt", "hi", "es", "th"], "library_name": "transformers", "license": "llama3.2", "pipeline_tag": "text-generation", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3"], "extra_gated_prompt": "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\n“Documentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\n“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. \nb. Redistribution and Use. \ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. \niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate the law or others’ rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law\n 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta \n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following:\n 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled substances\n 11. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following:\n 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 16. Generating, promoting, or further distributing spam\n 17. Impersonating another individual without consent, authorization, or legal right\n 18. Representing that the use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement \n4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models.\n\nPlease report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "Job title": {"type": "select", "options": ["Student", "Research Graduate", "AI researcher", "AI developer/engineer", "Reporter", "Other"]}, "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit"}
|
task
|
[
"SUMMARIZATION"
] | 42,457 |
cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all
|
cardiffnlp
|
text-classification
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"dataset:cardiffnlp/tweet_topic_multi",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-09-27T16:54:10Z |
2022-09-30T00:30:54+00:00
| 16 | 0 |
---
datasets:
- cardiffnlp/tweet_topic_multi
metrics:
- f1
- accuracy
pipeline_tag: text-classification
widget:
- text: I'm sure the {@Tampa Bay Lightning@} would’ve rather faced the Flyers but
man does their experience versus the Blue Jackets this year and last help them
a lot versus this Islanders team. Another meat grinder upcoming for the good guys
example_title: Example 1
- text: Love to take night time bike rides at the jersey shore. Seaside Heights boardwalk.
Beautiful weather. Wishing everyone a safe Labor Day weekend in the US.
example_title: Example 2
model-index:
- name: cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: cardiffnlp/tweet_topic_multi
type: cardiffnlp/tweet_topic_multi
split: test_2021
args: cardiffnlp/tweet_topic_multi
metrics:
- type: f1
value: 0.7625128733264676
name: F1
- type: f1_macro
value: 0.6035334168546909
name: F1 (macro)
- type: accuracy
value: 0.547945205479452
name: Accuracy
---
# cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2019-90m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) on the [tweet_topic_multi](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi). This model is fine-tuned on `train_all` split and validated on `test_2021` split of tweet_topic.
Fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi/blob/main/lm_finetuning.py). It achieves the following results on the test_2021 set:
- F1 (micro): 0.7625128733264676
- F1 (macro): 0.6035334168546909
- Accuracy: 0.547945205479452
### Usage
```python
import math
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
def sigmoid(x):
return 1 / (1 + math.exp(-x))
tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all")
model = AutoModelForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all", problem_type="multi_label_classification")
model.eval()
class_mapping = model.config.id2label
with torch.no_grad():
text = #NewVideo Cray Dollas- Water- Ft. Charlie Rose- (Official Music Video)- {{URL}} via {@YouTube@} #watchandlearn {{USERNAME}}
tokens = tokenizer(text, return_tensors='pt')
output = model(**tokens)
flags = [sigmoid(s) > 0.5 for s in output[0][0].detach().tolist()]
topic = [class_mapping[n] for n, i in enumerate(flags) if i]
print(topic)
```
### Reference
```
@inproceedings{dimosthenis-etal-2022-twitter,
title = "{T}witter {T}opic {C}lassification",
author = "Antypas, Dimosthenis and
Ushio, Asahi and
Camacho-Collados, Jose and
Neves, Leonardo and
Silva, Vitor and
Barbieri, Francesco",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics"
}
```
| null |
Non_BioNLP
|
# cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2019-90m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) on the [tweet_topic_multi](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi). This model is fine-tuned on `train_all` split and validated on `test_2021` split of tweet_topic.
Fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi/blob/main/lm_finetuning.py). It achieves the following results on the test_2021 set:
- F1 (micro): 0.7625128733264676
- F1 (macro): 0.6035334168546909
- Accuracy: 0.547945205479452
### Usage
```python
import math
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
def sigmoid(x):
return 1 / (1 + math.exp(-x))
tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all")
model = AutoModelForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all", problem_type="multi_label_classification")
model.eval()
class_mapping = model.config.id2label
with torch.no_grad():
text = #NewVideo Cray Dollas- Water- Ft. Charlie Rose- (Official Music Video)- {{URL}} via {@YouTube@} #watchandlearn {{USERNAME}}
tokens = tokenizer(text, return_tensors='pt')
output = model(**tokens)
flags = [sigmoid(s) > 0.5 for s in output[0][0].detach().tolist()]
topic = [class_mapping[n] for n, i in enumerate(flags) if i]
print(topic)
```
### Reference
```
@inproceedings{dimosthenis-etal-2022-twitter,
title = "{T}witter {T}opic {C}lassification",
author = "Antypas, Dimosthenis and
Ushio, Asahi and
Camacho-Collados, Jose and
Neves, Leonardo and
Silva, Vitor and
Barbieri, Francesco",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics"
}
```
|
{"datasets": ["cardiffnlp/tweet_topic_multi"], "metrics": ["f1", "accuracy"], "pipeline_tag": "text-classification", "widget": [{"text": "I'm sure the {@Tampa Bay Lightning@} would’ve rather faced the Flyers but man does their experience versus the Blue Jackets this year and last help them a lot versus this Islanders team. Another meat grinder upcoming for the good guys", "example_title": "Example 1"}, {"text": "Love to take night time bike rides at the jersey shore. Seaside Heights boardwalk. Beautiful weather. Wishing everyone a safe Labor Day weekend in the US.", "example_title": "Example 2"}], "model-index": [{"name": "cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "cardiffnlp/tweet_topic_multi", "type": "cardiffnlp/tweet_topic_multi", "split": "test_2021", "args": "cardiffnlp/tweet_topic_multi"}, "metrics": [{"type": "f1", "value": 0.7625128733264676, "name": "F1"}, {"type": "f1_macro", "value": 0.6035334168546909, "name": "F1 (macro)"}, {"type": "accuracy", "value": 0.547945205479452, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,458 |
youssefkhalil320/all-MiniLM-L6-v2-pairscore
|
youssefkhalil320
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:510287",
"loss:CoSENTLoss",
"en",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-10-07T12:41:12Z |
2024-10-08T14:35:01+00:00
| 14 | 0 |
---
base_model: sentence-transformers/all-MiniLM-L6-v2
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:510287
- loss:CoSENTLoss
widget:
- source_sentence: bag
sentences:
- bag
- summer colors bag
- carry all bag
- source_sentence: bean bag
sentences:
- bag
- havan bag
- black yellow shoes
- source_sentence: pyramid shaped cushion mattress
sentences:
- dress
- silver bag
- women shoes
- source_sentence: handcrafted rug
sentences:
- amaga cross bag - white
- handcrafted boots
- polyester top
- source_sentence: bean bag
sentences:
- bag
- v-neck dress
- bag
model-index:
- name: all-MiniLM-L6-v2-pair_score
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev
type: sts-dev
metrics:
- type: pearson_cosine
value: -0.13726370961372045
name: Pearson Cosine
- type: spearman_cosine
value: -0.16645918619928507
name: Spearman Cosine
- type: pearson_manhattan
value: -0.1405300294713842
name: Pearson Manhattan
- type: spearman_manhattan
value: -0.16334559546016153
name: Spearman Manhattan
- type: pearson_euclidean
value: -0.1432496898556385
name: Pearson Euclidean
- type: spearman_euclidean
value: -0.16645904911745338
name: Spearman Euclidean
- type: pearson_dot
value: -0.13726370008450378
name: Pearson Dot
- type: spearman_dot
value: -0.1664594964294906
name: Spearman Dot
- type: pearson_max
value: -0.13726370008450378
name: Pearson Max
- type: spearman_max
value: -0.16334559546016153
name: Spearman Max
---
# all-MiniLM-L6-v2-pair_score
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision 8b3219a92973c328a8e22fadcfa821b5dc75636a -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'bean bag',
'bag',
'v-neck dress',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-dev`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:------------|
| pearson_cosine | -0.1373 |
| **spearman_cosine** | **-0.1665** |
| pearson_manhattan | -0.1405 |
| spearman_manhattan | -0.1633 |
| pearson_euclidean | -0.1432 |
| spearman_euclidean | -0.1665 |
| pearson_dot | -0.1373 |
| spearman_dot | -0.1665 |
| pearson_max | -0.1373 |
| spearman_max | -0.1633 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | loss | sts-dev_spearman_cosine |
|:------:|:-----:|:-------------:|:------:|:-----------------------:|
| 0 | 0 | - | - | -0.1665 |
| 0.0063 | 100 | 11.9622 | - | - |
| 0.0125 | 200 | 11.265 | - | - |
| 0.0188 | 300 | 10.5195 | - | - |
| 0.0251 | 400 | 9.4744 | - | - |
| 0.0314 | 500 | 8.4815 | 8.6217 | - |
| 0.0376 | 600 | 7.6105 | - | - |
| 0.0439 | 700 | 6.8023 | - | - |
| 0.0502 | 800 | 6.1258 | - | - |
| 0.0564 | 900 | 5.5032 | - | - |
| 0.0627 | 1000 | 5.0397 | 5.1949 | - |
| 0.0690 | 1100 | 4.6909 | - | - |
| 0.0752 | 1200 | 4.5716 | - | - |
| 0.0815 | 1300 | 4.3983 | - | - |
| 0.0878 | 1400 | 4.2073 | - | - |
| 0.0941 | 1500 | 4.2164 | 4.1422 | - |
| 0.1003 | 1600 | 4.0921 | - | - |
| 0.1066 | 1700 | 4.1785 | - | - |
| 0.1129 | 1800 | 4.0503 | - | - |
| 0.1191 | 1900 | 3.8969 | - | - |
| 0.1254 | 2000 | 3.8538 | 3.9109 | - |
| 0.1317 | 2100 | 3.872 | - | - |
| 0.1380 | 2200 | 3.851 | - | - |
| 0.1442 | 2300 | 3.6301 | - | - |
| 0.1505 | 2400 | 3.5202 | - | - |
| 0.1568 | 2500 | 3.6759 | 3.6389 | - |
| 0.1630 | 2600 | 3.4106 | - | - |
| 0.1693 | 2700 | 3.69 | - | - |
| 0.1756 | 2800 | 3.6336 | - | - |
| 0.1819 | 2900 | 3.4715 | - | - |
| 0.1881 | 3000 | 3.2166 | 3.2739 | - |
| 0.1944 | 3100 | 3.3844 | - | - |
| 0.2007 | 3200 | 3.4449 | - | - |
| 0.2069 | 3300 | 3.0811 | - | - |
| 0.2132 | 3400 | 3.2777 | - | - |
| 0.2195 | 3500 | 2.9505 | 3.0865 | - |
| 0.2257 | 3600 | 3.1534 | - | - |
| 0.2320 | 3700 | 2.9669 | - | - |
| 0.2383 | 3800 | 2.9416 | - | - |
| 0.2446 | 3900 | 2.9637 | - | - |
| 0.2508 | 4000 | 2.9322 | 2.8447 | - |
| 0.2571 | 4100 | 2.6926 | - | - |
| 0.2634 | 4200 | 2.9353 | - | - |
| 0.2696 | 4300 | 2.635 | - | - |
| 0.2759 | 4400 | 2.5692 | - | - |
| 0.2822 | 4500 | 3.0283 | 2.9033 | - |
| 0.2885 | 4600 | 2.5804 | - | - |
| 0.2947 | 4700 | 3.1374 | - | - |
| 0.3010 | 4800 | 2.8479 | - | - |
| 0.3073 | 4900 | 2.6809 | - | - |
| 0.3135 | 5000 | 2.8267 | 2.6946 | - |
| 0.3198 | 5100 | 2.7341 | - | - |
| 0.3261 | 5200 | 2.8157 | - | - |
| 0.3324 | 5300 | 2.5867 | - | - |
| 0.3386 | 5400 | 2.8622 | - | - |
| 0.3449 | 5500 | 2.9063 | 2.6115 | - |
| 0.3512 | 5600 | 2.1514 | - | - |
| 0.3574 | 5700 | 2.3755 | - | - |
| 0.3637 | 5800 | 2.5055 | - | - |
| 0.3700 | 5900 | 3.3237 | - | - |
| 0.3762 | 6000 | 2.561 | 2.7512 | - |
| 0.3825 | 6100 | 2.4351 | - | - |
| 0.3888 | 6200 | 2.8472 | - | - |
| 0.3951 | 6300 | 2.76 | - | - |
| 0.4013 | 6400 | 2.1947 | - | - |
| 0.4076 | 6500 | 2.6409 | 2.5367 | - |
| 0.4139 | 6600 | 2.7262 | - | - |
| 0.4201 | 6700 | 2.7781 | - | - |
| 0.4264 | 6800 | 2.4718 | - | - |
| 0.4327 | 6900 | 2.567 | - | - |
| 0.4390 | 7000 | 2.4215 | 2.3409 | - |
| 0.4452 | 7100 | 1.9308 | - | - |
| 0.4515 | 7200 | 2.1232 | - | - |
| 0.4578 | 7300 | 2.421 | - | - |
| 0.4640 | 7400 | 2.3232 | - | - |
| 0.4703 | 7500 | 2.8543 | 2.3706 | - |
| 0.4766 | 7600 | 2.4276 | - | - |
| 0.4828 | 7700 | 2.4507 | - | - |
| 0.4891 | 7800 | 2.1963 | - | - |
| 0.4954 | 7900 | 2.4247 | - | - |
| 0.5017 | 8000 | 2.1948 | 2.5729 | - |
| 0.5079 | 8100 | 2.4069 | - | - |
| 0.5142 | 8200 | 2.4328 | - | - |
| 0.5205 | 8300 | 2.2198 | - | - |
| 0.5267 | 8400 | 2.1746 | - | - |
| 0.5330 | 8500 | 2.2618 | 2.3459 | - |
| 0.5393 | 8600 | 2.3909 | - | - |
| 0.5456 | 8700 | 2.035 | - | - |
| 0.5518 | 8800 | 2.2626 | - | - |
| 0.5581 | 8900 | 2.1541 | - | - |
| 0.5644 | 9000 | 1.9424 | 2.1625 | - |
| 0.5706 | 9100 | 2.5152 | - | - |
| 0.5769 | 9200 | 2.0462 | - | - |
| 0.5832 | 9300 | 1.6124 | - | - |
| 0.5895 | 9400 | 2.2236 | - | - |
| 0.5957 | 9500 | 2.4706 | 2.0569 | - |
| 0.6020 | 9600 | 2.4612 | - | - |
| 0.6083 | 9700 | 2.2784 | - | - |
| 0.6145 | 9800 | 1.9335 | - | - |
| 0.6208 | 9900 | 2.3779 | - | - |
| 0.6271 | 10000 | 1.6778 | 2.1123 | - |
| 0.6333 | 10100 | 2.4721 | - | - |
| 0.6396 | 10200 | 1.7822 | - | - |
| 0.6459 | 10300 | 2.077 | - | - |
| 0.6522 | 10400 | 1.9223 | - | - |
| 0.6584 | 10500 | 2.3513 | 1.8403 | - |
| 0.6647 | 10600 | 2.1387 | - | - |
| 0.6710 | 10700 | 2.1853 | - | - |
| 0.6772 | 10800 | 1.8715 | - | - |
| 0.6835 | 10900 | 1.8581 | - | - |
| 0.6898 | 11000 | 2.0076 | 2.0063 | - |
| 0.6961 | 11100 | 2.3144 | - | - |
| 0.7023 | 11200 | 2.0942 | - | - |
| 0.7086 | 11300 | 1.9117 | - | - |
| 0.7149 | 11400 | 2.2214 | - | - |
| 0.7211 | 11500 | 1.9678 | 1.9029 | - |
| 0.7274 | 11600 | 1.7459 | - | - |
| 0.7337 | 11700 | 2.0616 | - | - |
| 0.7400 | 11800 | 1.6169 | - | - |
| 0.7462 | 11900 | 1.5674 | - | - |
| 0.7525 | 12000 | 1.4956 | 1.8267 | - |
| 0.7588 | 12100 | 2.3816 | - | - |
| 0.7650 | 12200 | 2.2387 | - | - |
| 0.7713 | 12300 | 1.4625 | - | - |
| 0.7776 | 12400 | 2.028 | - | - |
| 0.7838 | 12500 | 2.151 | 1.7581 | - |
| 0.7901 | 12600 | 1.6896 | - | - |
| 0.7964 | 12700 | 1.8526 | - | - |
| 0.8027 | 12800 | 1.9745 | - | - |
| 0.8089 | 12900 | 2.1042 | - | - |
| 0.8152 | 13000 | 1.83 | 1.5667 | - |
| 0.8215 | 13100 | 1.7451 | - | - |
| 0.8277 | 13200 | 1.568 | - | - |
| 0.8340 | 13300 | 1.4432 | - | - |
| 0.8403 | 13400 | 1.9172 | - | - |
| 0.8466 | 13500 | 1.9438 | 1.6055 | - |
| 0.8528 | 13600 | 1.6488 | - | - |
| 0.8591 | 13700 | 1.8166 | - | - |
| 0.8654 | 13800 | 1.5929 | - | - |
| 0.8716 | 13900 | 1.2476 | - | - |
| 0.8779 | 14000 | 1.5236 | 1.8921 | - |
| 0.8842 | 14100 | 1.6538 | - | - |
| 0.8904 | 14200 | 1.8689 | - | - |
| 0.8967 | 14300 | 1.0831 | - | - |
| 0.9030 | 14400 | 1.7765 | - | - |
| 0.9093 | 14500 | 1.3548 | 1.6683 | - |
| 0.9155 | 14600 | 1.7792 | - | - |
| 0.9218 | 14700 | 1.73 | - | - |
| 0.9281 | 14800 | 1.5979 | - | - |
| 0.9343 | 14900 | 1.3678 | - | - |
| 0.9406 | 15000 | 2.0664 | 1.5161 | - |
| 0.9469 | 15100 | 1.4472 | - | - |
| 0.9532 | 15200 | 1.447 | - | - |
| 0.9594 | 15300 | 1.7261 | - | - |
| 0.9657 | 15400 | 1.4881 | - | - |
| 0.9720 | 15500 | 1.313 | 1.6227 | - |
| 0.9782 | 15600 | 1.4587 | - | - |
| 0.9845 | 15700 | 2.0982 | - | - |
| 0.9908 | 15800 | 1.4854 | - | - |
| 0.9971 | 15900 | 1.343 | - | - |
| 1.0033 | 16000 | 1.1795 | 1.5639 | - |
| 1.0096 | 16100 | 1.4001 | - | - |
| 1.0159 | 16200 | 1.3867 | - | - |
| 1.0221 | 16300 | 1.5191 | - | - |
| 1.0284 | 16400 | 1.4693 | - | - |
| 1.0347 | 16500 | 1.628 | 1.4716 | - |
| 1.0409 | 16600 | 1.0041 | - | - |
| 1.0472 | 16700 | 1.7728 | - | - |
| 1.0535 | 16800 | 1.5586 | - | - |
| 1.0598 | 16900 | 1.7229 | - | - |
| 1.0660 | 17000 | 1.5556 | 1.4676 | - |
| 1.0723 | 17100 | 1.2529 | - | - |
| 1.0786 | 17200 | 1.4787 | - | - |
| 1.0848 | 17300 | 1.1947 | - | - |
| 1.0911 | 17400 | 1.3014 | - | - |
| 1.0974 | 17500 | 1.3743 | 1.4624 | - |
| 1.1037 | 17600 | 1.3397 | - | - |
| 1.1099 | 17700 | 1.3062 | - | - |
| 1.1162 | 17800 | 1.3288 | - | - |
| 1.1225 | 17900 | 2.0002 | - | - |
| 1.1287 | 18000 | 2.0294 | 1.4185 | - |
| 1.1350 | 18100 | 1.5053 | - | - |
| 1.1413 | 18200 | 1.3657 | - | - |
| 1.1476 | 18300 | 1.3877 | - | - |
| 1.1538 | 18400 | 1.9034 | - | - |
| 1.1601 | 18500 | 1.4001 | 1.3813 | - |
| 1.1664 | 18600 | 1.7503 | - | - |
| 1.1726 | 18700 | 1.1482 | - | - |
| 1.1789 | 18800 | 1.0958 | - | - |
| 1.1852 | 18900 | 1.2657 | - | - |
| 1.1914 | 19000 | 1.3721 | 1.4702 | - |
| 1.1977 | 19100 | 1.2361 | - | - |
| 1.2040 | 19200 | 1.003 | - | - |
| 1.2103 | 19300 | 1.3677 | - | - |
| 1.2165 | 19400 | 1.668 | - | - |
| 1.2228 | 19500 | 1.2026 | 1.3641 | - |
| 1.2291 | 19600 | 1.1754 | - | - |
| 1.2353 | 19700 | 1.3196 | - | - |
| 1.2416 | 19800 | 1.4766 | - | - |
| 1.2479 | 19900 | 1.389 | - | - |
| 1.2542 | 20000 | 1.6974 | 1.3344 | - |
| 1.2604 | 20100 | 1.5036 | - | - |
| 1.2667 | 20200 | 1.1728 | - | - |
| 1.2730 | 20300 | 1.6058 | - | - |
| 1.2792 | 20400 | 1.5191 | - | - |
| 1.2855 | 20500 | 1.4516 | 1.3210 | - |
| 1.2918 | 20600 | 1.3485 | - | - |
| 1.2980 | 20700 | 1.2598 | - | - |
| 1.3043 | 20800 | 1.5871 | - | - |
| 1.3106 | 20900 | 1.1965 | - | - |
| 1.3169 | 21000 | 1.3983 | 1.2517 | - |
| 1.3231 | 21100 | 1.2605 | - | - |
| 1.3294 | 21200 | 1.5629 | - | - |
| 1.3357 | 21300 | 1.0668 | - | - |
| 1.3419 | 21400 | 1.1879 | - | - |
| 1.3482 | 21500 | 1.132 | 1.3881 | - |
| 1.3545 | 21600 | 1.7231 | - | - |
| 1.3608 | 21700 | 1.7636 | - | - |
| 1.3670 | 21800 | 1.1193 | - | - |
| 1.3733 | 21900 | 1.4662 | - | - |
| 1.3796 | 22000 | 2.0394 | 1.1927 | - |
| 1.3858 | 22100 | 1.1535 | - | - |
| 1.3921 | 22200 | 1.4592 | - | - |
| 1.3984 | 22300 | 1.276 | - | - |
| 1.4047 | 22400 | 1.2984 | - | - |
| 1.4109 | 22500 | 0.9741 | 1.2707 | - |
| 1.4172 | 22600 | 1.4253 | - | - |
| 1.4235 | 22700 | 1.0769 | - | - |
| 1.4297 | 22800 | 0.8276 | - | - |
| 1.4360 | 22900 | 1.2689 | - | - |
| 1.4423 | 23000 | 1.4817 | 1.2095 | - |
| 1.4485 | 23100 | 1.1522 | - | - |
| 1.4548 | 23200 | 0.8978 | - | - |
| 1.4611 | 23300 | 1.015 | - | - |
| 1.4674 | 23400 | 1.0351 | - | - |
| 1.4736 | 23500 | 1.3959 | 1.1969 | - |
| 1.4799 | 23600 | 1.2879 | - | - |
| 1.4862 | 23700 | 1.0651 | - | - |
| 1.4924 | 23800 | 1.1601 | - | - |
| 1.4987 | 23900 | 1.0034 | - | - |
| 1.5050 | 24000 | 1.3386 | 1.1590 | - |
| 1.5113 | 24100 | 1.142 | - | - |
| 1.5175 | 24200 | 1.3495 | - | - |
| 1.5238 | 24300 | 0.9993 | - | - |
| 1.5301 | 24400 | 0.9363 | - | - |
| 1.5363 | 24500 | 1.4402 | 1.2178 | - |
| 1.5426 | 24600 | 1.0648 | - | - |
| 1.5489 | 24700 | 1.5102 | - | - |
| 1.5552 | 24800 | 1.3415 | - | - |
| 1.5614 | 24900 | 0.7441 | - | - |
| 1.5677 | 25000 | 0.901 | 1.1982 | - |
| 1.5740 | 25100 | 1.3147 | - | - |
| 1.5802 | 25200 | 0.971 | - | - |
| 1.5865 | 25300 | 0.9988 | - | - |
| 1.5928 | 25400 | 1.1445 | - | - |
| 1.5990 | 25500 | 1.1018 | 1.1423 | - |
| 1.6053 | 25600 | 1.0902 | - | - |
| 1.6116 | 25700 | 1.2577 | - | - |
| 1.6179 | 25800 | 1.2005 | - | - |
| 1.6241 | 25900 | 1.2839 | - | - |
| 1.6304 | 26000 | 1.4122 | 1.1125 | - |
| 1.6367 | 26100 | 0.7832 | - | - |
| 1.6429 | 26200 | 1.3278 | - | - |
| 1.6492 | 26300 | 1.2055 | - | - |
| 1.6555 | 26400 | 1.5814 | - | - |
| 1.6618 | 26500 | 1.0393 | 1.0946 | - |
| 1.6680 | 26600 | 1.4531 | - | - |
| 1.6743 | 26700 | 1.4162 | - | - |
| 1.6806 | 26800 | 0.8498 | - | - |
| 1.6868 | 26900 | 1.1318 | - | - |
| 1.6931 | 27000 | 1.3287 | 1.0439 | - |
| 1.6994 | 27100 | 1.0886 | - | - |
| 1.7056 | 27200 | 0.8991 | - | - |
| 1.7119 | 27300 | 0.7563 | - | - |
| 1.7182 | 27400 | 0.9284 | - | - |
| 1.7245 | 27500 | 1.3388 | 1.0940 | - |
| 1.7307 | 27600 | 1.2951 | - | - |
| 1.7370 | 27700 | 0.9789 | - | - |
| 1.7433 | 27800 | 1.2898 | - | - |
| 1.7495 | 27900 | 0.9915 | - | - |
| 1.7558 | 28000 | 1.5349 | 1.0266 | - |
| 1.7621 | 28100 | 1.124 | - | - |
| 1.7684 | 28200 | 0.809 | - | - |
| 1.7746 | 28300 | 0.9617 | - | - |
| 1.7809 | 28400 | 1.3061 | - | - |
| 1.7872 | 28500 | 1.1323 | 1.0488 | - |
| 1.7934 | 28600 | 1.2991 | - | - |
| 1.7997 | 28700 | 0.8708 | - | - |
| 1.8060 | 28800 | 0.7493 | - | - |
| 1.8123 | 28900 | 1.004 | - | - |
| 1.8185 | 29000 | 1.1477 | 1.0206 | - |
| 1.8248 | 29100 | 1.1826 | - | - |
| 1.8311 | 29200 | 1.0961 | - | - |
| 1.8373 | 29300 | 1.4743 | - | - |
| 1.8436 | 29400 | 0.8413 | - | - |
| 1.8499 | 29500 | 1.2623 | 1.0047 | - |
| 1.8561 | 29600 | 0.8486 | - | - |
| 1.8624 | 29700 | 1.4481 | - | - |
| 1.8687 | 29800 | 1.2704 | - | - |
| 1.8750 | 29900 | 1.1913 | - | - |
| 1.8812 | 30000 | 0.9369 | 1.0277 | - |
| 1.8875 | 30100 | 1.2427 | - | - |
| 1.8938 | 30200 | 1.0576 | - | - |
| 1.9000 | 30300 | 0.9188 | - | - |
| 1.9063 | 30400 | 1.3227 | - | - |
| 1.9126 | 30500 | 1.4614 | 1.0550 | - |
| 1.9189 | 30600 | 1.2316 | - | - |
| 1.9251 | 30700 | 0.9487 | - | - |
| 1.9314 | 30800 | 1.1651 | - | - |
| 1.9377 | 30900 | 1.1622 | - | - |
| 1.9439 | 31000 | 1.1801 | 0.9981 | - |
| 1.9502 | 31100 | 0.8798 | - | - |
| 1.9565 | 31200 | 0.7196 | - | - |
| 1.9628 | 31300 | 1.2003 | - | - |
| 1.9690 | 31400 | 1.1823 | - | - |
| 1.9753 | 31500 | 1.1453 | 1.0320 | - |
| 1.9816 | 31600 | 1.4751 | - | - |
| 1.9878 | 31700 | 0.8502 | - | - |
| 1.9941 | 31800 | 0.8757 | - | - |
| 2.0004 | 31900 | 1.0489 | - | - |
| 2.0066 | 32000 | 1.4672 | 1.0571 | - |
| 2.0129 | 32100 | 0.9474 | - | - |
| 2.0192 | 32200 | 0.8037 | - | - |
| 2.0255 | 32300 | 0.9782 | - | - |
| 2.0317 | 32400 | 0.6943 | - | - |
| 2.0380 | 32500 | 1.0097 | 0.9797 | - |
| 2.0443 | 32600 | 0.9067 | - | - |
| 2.0505 | 32700 | 1.09 | - | - |
| 2.0568 | 32800 | 0.8464 | - | - |
| 2.0631 | 32900 | 0.9359 | - | - |
| 2.0694 | 33000 | 0.813 | 0.9907 | - |
| 2.0756 | 33100 | 0.8738 | - | - |
| 2.0819 | 33200 | 0.8178 | - | - |
| 2.0882 | 33300 | 1.1704 | - | - |
| 2.0944 | 33400 | 1.0073 | - | - |
| 2.1007 | 33500 | 1.1849 | 0.9582 | - |
| 2.1070 | 33600 | 0.7795 | - | - |
| 2.1133 | 33700 | 0.7688 | - | - |
| 2.1195 | 33800 | 0.9465 | - | - |
| 2.1258 | 33900 | 1.0883 | - | - |
| 2.1321 | 34000 | 0.7711 | 0.9557 | - |
| 2.1383 | 34100 | 0.9767 | - | - |
| 2.1446 | 34200 | 0.6702 | - | - |
| 2.1509 | 34300 | 0.9444 | - | - |
| 2.1571 | 34400 | 0.8741 | - | - |
| 2.1634 | 34500 | 1.0717 | 0.9526 | - |
| 2.1697 | 34600 | 0.8584 | - | - |
| 2.1760 | 34700 | 0.8926 | - | - |
| 2.1822 | 34800 | 0.8567 | - | - |
| 2.1885 | 34900 | 0.71 | - | - |
| 2.1948 | 35000 | 1.1285 | 0.9589 | - |
| 2.2010 | 35100 | 0.8999 | - | - |
| 2.2073 | 35200 | 0.8459 | - | - |
| 2.2136 | 35300 | 1.0608 | - | - |
| 2.2199 | 35400 | 0.6115 | - | - |
| 2.2261 | 35500 | 1.2468 | 0.9769 | - |
| 2.2324 | 35600 | 0.9987 | - | - |
| 2.2387 | 35700 | 0.9186 | - | - |
| 2.2449 | 35800 | 1.0505 | - | - |
| 2.2512 | 35900 | 0.6253 | - | - |
| 2.2575 | 36000 | 0.6523 | 0.9501 | - |
| 2.2637 | 36100 | 0.8252 | - | - |
| 2.2700 | 36200 | 0.9793 | - | - |
| 2.2763 | 36300 | 0.8845 | - | - |
| 2.2826 | 36400 | 1.0121 | - | - |
| 2.2888 | 36500 | 0.9849 | 0.9245 | - |
| 2.2951 | 36600 | 1.2937 | - | - |
| 2.3014 | 36700 | 1.0484 | - | - |
| 2.3076 | 36800 | 0.8801 | - | - |
| 2.3139 | 36900 | 0.7552 | - | - |
| 2.3202 | 37000 | 0.7641 | 0.9280 | - |
| 2.3265 | 37100 | 0.883 | - | - |
| 2.3327 | 37200 | 0.77 | - | - |
| 2.3390 | 37300 | 1.2699 | - | - |
| 2.3453 | 37400 | 0.8766 | - | - |
| 2.3515 | 37500 | 1.1154 | 0.9623 | - |
| 2.3578 | 37600 | 1.0634 | - | - |
| 2.3641 | 37700 | 0.8822 | - | - |
| 2.3704 | 37800 | 0.839 | - | - |
| 2.3766 | 37900 | 0.684 | - | - |
| 2.3829 | 38000 | 0.8051 | 0.9198 | - |
| 2.3892 | 38100 | 0.9585 | - | - |
| 2.3954 | 38200 | 0.7156 | - | - |
| 2.4017 | 38300 | 0.5271 | - | - |
| 2.4080 | 38400 | 0.805 | - | - |
| 2.4142 | 38500 | 0.7898 | 0.8785 | - |
| 2.4205 | 38600 | 0.6935 | - | - |
| 2.4268 | 38700 | 0.8011 | - | - |
| 2.4331 | 38800 | 0.9812 | - | - |
| 2.4393 | 38900 | 0.4427 | - | - |
| 2.4456 | 39000 | 0.492 | 0.9313 | - |
| 2.4519 | 39100 | 0.47 | - | - |
| 2.4581 | 39200 | 1.1876 | - | - |
| 2.4644 | 39300 | 0.5778 | - | - |
| 2.4707 | 39400 | 0.6763 | - | - |
| 2.4770 | 39500 | 0.6896 | 0.8978 | - |
| 2.4832 | 39600 | 0.8905 | - | - |
| 2.4895 | 39700 | 0.7845 | - | - |
| 2.4958 | 39800 | 0.8691 | - | - |
| 2.5020 | 39900 | 0.55 | - | - |
| 2.5083 | 40000 | 0.6978 | 0.9054 | - |
| 2.5146 | 40100 | 0.6378 | - | - |
| 2.5209 | 40200 | 0.895 | - | - |
| 2.5271 | 40300 | 0.9683 | - | - |
| 2.5334 | 40400 | 0.9373 | - | - |
| 2.5397 | 40500 | 0.7406 | 0.9128 | - |
| 2.5459 | 40600 | 0.8917 | - | - |
| 2.5522 | 40700 | 1.0552 | - | - |
| 2.5585 | 40800 | 0.5281 | - | - |
| 2.5647 | 40900 | 0.9064 | - | - |
| 2.5710 | 41000 | 0.6886 | 0.9049 | - |
| 2.5773 | 41100 | 0.7166 | - | - |
| 2.5836 | 41200 | 0.8343 | - | - |
| 2.5898 | 41300 | 0.9468 | - | - |
| 2.5961 | 41400 | 0.8529 | - | - |
| 2.6024 | 41500 | 0.8092 | 0.8954 | - |
| 2.6086 | 41600 | 0.8501 | - | - |
| 2.6149 | 41700 | 0.9877 | - | - |
| 2.6212 | 41800 | 0.8592 | - | - |
| 2.6275 | 41900 | 0.8632 | - | - |
| 2.6337 | 42000 | 0.6766 | 0.8707 | - |
| 2.6400 | 42100 | 0.7587 | - | - |
| 2.6463 | 42200 | 0.8949 | - | - |
| 2.6525 | 42300 | 0.4173 | - | - |
| 2.6588 | 42400 | 0.5995 | - | - |
| 2.6651 | 42500 | 0.8157 | 0.8681 | - |
| 2.6713 | 42600 | 0.92 | - | - |
| 2.6776 | 42700 | 0.9118 | - | - |
| 2.6839 | 42800 | 0.7446 | - | - |
| 2.6902 | 42900 | 0.6835 | - | - |
| 2.6964 | 43000 | 0.6157 | 0.8691 | - |
| 2.7027 | 43100 | 0.5423 | - | - |
| 2.7090 | 43200 | 0.8098 | - | - |
| 2.7152 | 43300 | 0.8908 | - | - |
| 2.7215 | 43400 | 1.1275 | - | - |
| 2.7278 | 43500 | 1.0345 | 0.8884 | - |
| 2.7341 | 43600 | 0.6198 | - | - |
| 2.7403 | 43700 | 0.8315 | - | - |
| 2.7466 | 43800 | 0.9317 | - | - |
| 2.7529 | 43900 | 0.516 | - | - |
| 2.7591 | 44000 | 0.8229 | 0.8659 | - |
| 2.7654 | 44100 | 0.7989 | - | - |
| 2.7717 | 44200 | 0.9291 | - | - |
| 2.7780 | 44300 | 0.5954 | - | - |
| 2.7842 | 44400 | 0.8537 | - | - |
| 2.7905 | 44500 | 0.9506 | 0.8657 | - |
| 2.7968 | 44600 | 0.5789 | - | - |
| 2.8030 | 44700 | 0.4861 | - | - |
| 2.8093 | 44800 | 0.9614 | - | - |
| 2.8156 | 44900 | 1.0069 | - | - |
| 2.8218 | 45000 | 0.5599 | 0.8619 | - |
| 2.8281 | 45100 | 1.3747 | - | - |
| 2.8344 | 45200 | 0.5638 | - | - |
| 2.8407 | 45300 | 1.2095 | - | - |
| 2.8469 | 45400 | 0.7364 | - | - |
| 2.8532 | 45500 | 0.5692 | 0.8818 | - |
| 2.8595 | 45600 | 0.8848 | - | - |
| 2.8657 | 45700 | 0.9063 | - | - |
| 2.8720 | 45800 | 0.8675 | - | - |
| 2.8783 | 45900 | 0.9703 | - | - |
| 2.8846 | 46000 | 0.6657 | 0.8424 | - |
| 2.8908 | 46100 | 0.6564 | - | - |
| 2.8971 | 46200 | 0.7945 | - | - |
| 2.9034 | 46300 | 0.6341 | - | - |
| 2.9096 | 46400 | 1.042 | - | - |
| 2.9159 | 46500 | 1.0812 | 0.8510 | - |
| 2.9222 | 46600 | 0.9787 | - | - |
| 2.9285 | 46700 | 0.8732 | - | - |
| 2.9347 | 46800 | 1.1872 | - | - |
| 2.9410 | 46900 | 0.989 | - | - |
| 2.9473 | 47000 | 0.874 | 0.8215 | - |
| 2.9535 | 47100 | 1.0229 | - | - |
| 2.9598 | 47200 | 0.9888 | - | - |
| 2.9661 | 47300 | 0.4883 | - | - |
| 2.9723 | 47400 | 0.7474 | - | - |
| 2.9786 | 47500 | 0.7615 | 0.8218 | - |
| 2.9849 | 47600 | 0.6208 | - | - |
| 2.9912 | 47700 | 0.8332 | - | - |
| 2.9974 | 47800 | 0.6734 | - | - |
| 3.0037 | 47900 | 0.5095 | - | - |
| 3.0100 | 48000 | 0.7709 | 0.8220 | - |
| 3.0162 | 48100 | 0.5449 | - | - |
| 3.0225 | 48200 | 0.772 | - | - |
| 3.0288 | 48300 | 0.8582 | - | - |
| 3.0351 | 48400 | 0.5742 | - | - |
| 3.0413 | 48500 | 0.5584 | 0.8493 | - |
| 3.0476 | 48600 | 0.9766 | - | - |
| 3.0539 | 48700 | 0.6473 | - | - |
| 3.0601 | 48800 | 0.5861 | - | - |
| 3.0664 | 48900 | 0.6377 | - | - |
| 3.0727 | 49000 | 0.8393 | 0.8430 | - |
| 3.0789 | 49100 | 0.8385 | - | - |
| 3.0852 | 49200 | 0.5523 | - | - |
| 3.0915 | 49300 | 0.6217 | - | - |
| 3.0978 | 49400 | 0.5515 | - | - |
| 3.1040 | 49500 | 0.851 | 0.8000 | - |
| 3.1103 | 49600 | 0.9247 | - | - |
| 3.1166 | 49700 | 0.655 | - | - |
| 3.1228 | 49800 | 0.4979 | - | - |
| 3.1291 | 49900 | 0.7521 | - | - |
| 3.1354 | 50000 | 0.53 | 0.8105 | - |
| 3.1417 | 50100 | 0.5943 | - | - |
| 3.1479 | 50200 | 0.4659 | - | - |
| 3.1542 | 50300 | 0.4843 | - | - |
| 3.1605 | 50400 | 0.7577 | - | - |
| 3.1667 | 50500 | 0.3448 | 0.8055 | - |
| 3.1730 | 50600 | 0.8392 | - | - |
| 3.1793 | 50700 | 0.75 | - | - |
| 3.1856 | 50800 | 0.5195 | - | - |
| 3.1918 | 50900 | 0.617 | - | - |
| 3.1981 | 51000 | 0.6892 | 0.8293 | - |
| 3.2044 | 51100 | 0.497 | - | - |
| 3.2106 | 51200 | 0.6793 | - | - |
| 3.2169 | 51300 | 0.7251 | - | - |
| 3.2232 | 51400 | 0.6471 | - | - |
| 3.2294 | 51500 | 0.775 | 0.8013 | - |
| 3.2357 | 51600 | 0.7289 | - | - |
| 3.2420 | 51700 | 0.6894 | - | - |
| 3.2483 | 51800 | 0.5677 | - | - |
| 3.2545 | 51900 | 0.317 | - | - |
| 3.2608 | 52000 | 0.5376 | 0.7853 | - |
| 3.2671 | 52100 | 0.4582 | - | - |
| 3.2733 | 52200 | 0.8505 | - | - |
| 3.2796 | 52300 | 0.6236 | - | - |
| 3.2859 | 52400 | 0.7388 | - | - |
| 3.2922 | 52500 | 0.7061 | 0.7863 | - |
| 3.2984 | 52600 | 0.5411 | - | - |
| 3.3047 | 52700 | 0.9511 | - | - |
| 3.3110 | 52800 | 0.5364 | - | - |
| 3.3172 | 52900 | 0.5795 | - | - |
| 3.3235 | 53000 | 0.5305 | 0.7876 | - |
| 3.3298 | 53100 | 0.8051 | - | - |
| 3.3361 | 53200 | 0.5342 | - | - |
| 3.3423 | 53300 | 0.4567 | - | - |
| 3.3486 | 53400 | 0.9751 | - | - |
| 3.3549 | 53500 | 0.4413 | 0.8008 | - |
| 3.3611 | 53600 | 0.6011 | - | - |
| 3.3674 | 53700 | 0.4708 | - | - |
| 3.3737 | 53800 | 0.6167 | - | - |
| 3.3799 | 53900 | 0.7653 | - | - |
| 3.3862 | 54000 | 0.7781 | 0.7897 | - |
| 3.3925 | 54100 | 0.9323 | - | - |
| 3.3988 | 54200 | 0.6003 | - | - |
| 3.4050 | 54300 | 0.5268 | - | - |
| 3.4113 | 54400 | 0.6639 | - | - |
| 3.4176 | 54500 | 0.388 | 0.7855 | - |
| 3.4238 | 54600 | 0.7258 | - | - |
| 3.4301 | 54700 | 0.6475 | - | - |
| 3.4364 | 54800 | 0.795 | - | - |
| 3.4427 | 54900 | 0.4978 | - | - |
| 3.4489 | 55000 | 0.6259 | 0.7705 | - |
| 3.4552 | 55100 | 0.791 | - | - |
| 3.4615 | 55200 | 0.7602 | - | - |
| 3.4677 | 55300 | 0.2236 | - | - |
| 3.4740 | 55400 | 0.5577 | - | - |
| 3.4803 | 55500 | 0.4214 | 0.7683 | - |
| 3.4865 | 55600 | 0.7335 | - | - |
| 3.4928 | 55700 | 0.7536 | - | - |
| 3.4991 | 55800 | 0.4577 | - | - |
| 3.5054 | 55900 | 0.5869 | - | - |
| 3.5116 | 56000 | 0.8563 | 0.7587 | - |
| 3.5179 | 56100 | 0.9291 | - | - |
| 3.5242 | 56200 | 0.4387 | - | - |
| 3.5304 | 56300 | 0.4491 | - | - |
| 3.5367 | 56400 | 0.506 | - | - |
| 3.5430 | 56500 | 0.6626 | 0.7634 | - |
| 3.5493 | 56600 | 0.8654 | - | - |
| 3.5555 | 56700 | 0.4455 | - | - |
| 3.5618 | 56800 | 0.4593 | - | - |
| 3.5681 | 56900 | 0.878 | - | - |
| 3.5743 | 57000 | 0.3737 | 0.7617 | - |
| 3.5806 | 57100 | 0.377 | - | - |
| 3.5869 | 57200 | 0.6894 | - | - |
| 3.5932 | 57300 | 0.6635 | - | - |
| 3.5994 | 57400 | 0.9224 | - | - |
| 3.6057 | 57500 | 0.635 | 0.7669 | - |
| 3.6120 | 57600 | 0.6797 | - | - |
| 3.6182 | 57700 | 0.9814 | - | - |
| 3.6245 | 57800 | 0.9893 | - | - |
| 3.6308 | 57900 | 0.6753 | - | - |
| 3.6370 | 58000 | 0.8349 | 0.7501 | - |
| 3.6433 | 58100 | 0.8523 | - | - |
| 3.6496 | 58200 | 0.2962 | - | - |
| 3.6559 | 58300 | 0.6585 | - | - |
| 3.6621 | 58400 | 1.0247 | - | - |
| 3.6684 | 58500 | 0.8638 | 0.7577 | - |
| 3.6747 | 58600 | 0.9456 | - | - |
| 3.6809 | 58700 | 0.5401 | - | - |
| 3.6872 | 58800 | 0.6602 | - | - |
| 3.6935 | 58900 | 0.7543 | - | - |
| 3.6998 | 59000 | 0.7893 | 0.7600 | - |
| 3.7060 | 59100 | 0.7746 | - | - |
| 3.7123 | 59200 | 0.6539 | - | - |
| 3.7186 | 59300 | 0.8083 | - | - |
| 3.7248 | 59400 | 0.3429 | - | - |
| 3.7311 | 59500 | 0.5005 | 0.7445 | - |
| 3.7374 | 59600 | 0.6238 | - | - |
| 3.7437 | 59700 | 0.4343 | - | - |
| 3.7499 | 59800 | 0.8189 | - | - |
| 3.7562 | 59900 | 0.6272 | - | - |
| 3.7625 | 60000 | 0.2982 | 0.7597 | - |
| 3.7687 | 60100 | 0.7028 | - | - |
| 3.7750 | 60200 | 0.9447 | - | - |
| 3.7813 | 60300 | 0.6175 | - | - |
| 3.7875 | 60400 | 0.5856 | - | - |
| 3.7938 | 60500 | 0.8249 | 0.7505 | - |
| 3.8001 | 60600 | 0.6617 | - | - |
| 3.8064 | 60700 | 0.5767 | - | - |
| 3.8126 | 60800 | 1.0094 | - | - |
| 3.8189 | 60900 | 0.471 | - | - |
| 3.8252 | 61000 | 0.6313 | 0.7489 | - |
| 3.8314 | 61100 | 0.6545 | - | - |
| 3.8377 | 61200 | 0.699 | - | - |
| 3.8440 | 61300 | 0.6272 | - | - |
| 3.8503 | 61400 | 0.7375 | - | - |
| 3.8565 | 61500 | 0.4213 | 0.7490 | - |
| 3.8628 | 61600 | 0.6631 | - | - |
| 3.8691 | 61700 | 0.552 | - | - |
| 3.8753 | 61800 | 0.7041 | - | - |
| 3.8816 | 61900 | 0.8457 | - | - |
| 3.8879 | 62000 | 0.8104 | 0.7477 | - |
| 3.8941 | 62100 | 0.4494 | - | - |
| 3.9004 | 62200 | 0.6947 | - | - |
| 3.9067 | 62300 | 0.8061 | - | - |
| 3.9130 | 62400 | 0.416 | - | - |
| 3.9192 | 62500 | 0.7359 | 0.7468 | - |
| 3.9255 | 62600 | 0.7408 | - | - |
| 3.9318 | 62700 | 0.6255 | - | - |
| 3.9380 | 62800 | 0.7865 | - | - |
| 3.9443 | 62900 | 0.4879 | - | - |
| 3.9506 | 63000 | 0.5196 | 0.7485 | - |
| 3.9569 | 63100 | 0.5683 | - | - |
| 3.9631 | 63200 | 0.5141 | - | - |
| 3.9694 | 63300 | 0.6068 | - | - |
| 3.9757 | 63400 | 0.5929 | - | - |
| 3.9819 | 63500 | 0.7513 | 0.7482 | - |
| 3.9882 | 63600 | 0.5053 | - | - |
| 3.9945 | 63700 | 0.5707 | - | - |
</details>
### Framework Versions
- Python: 3.8.10
- Sentence Transformers: 3.1.1
- Transformers: 4.45.1
- PyTorch: 2.4.0+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# all-MiniLM-L6-v2-pair_score
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision 8b3219a92973c328a8e22fadcfa821b5dc75636a -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'bean bag',
'bag',
'v-neck dress',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-dev`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:------------|
| pearson_cosine | -0.1373 |
| **spearman_cosine** | **-0.1665** |
| pearson_manhattan | -0.1405 |
| spearman_manhattan | -0.1633 |
| pearson_euclidean | -0.1432 |
| spearman_euclidean | -0.1665 |
| pearson_dot | -0.1373 |
| spearman_dot | -0.1665 |
| pearson_max | -0.1373 |
| spearman_max | -0.1633 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | loss | sts-dev_spearman_cosine |
|:------:|:-----:|:-------------:|:------:|:-----------------------:|
| 0 | 0 | - | - | -0.1665 |
| 0.0063 | 100 | 11.9622 | - | - |
| 0.0125 | 200 | 11.265 | - | - |
| 0.0188 | 300 | 10.5195 | - | - |
| 0.0251 | 400 | 9.4744 | - | - |
| 0.0314 | 500 | 8.4815 | 8.6217 | - |
| 0.0376 | 600 | 7.6105 | - | - |
| 0.0439 | 700 | 6.8023 | - | - |
| 0.0502 | 800 | 6.1258 | - | - |
| 0.0564 | 900 | 5.5032 | - | - |
| 0.0627 | 1000 | 5.0397 | 5.1949 | - |
| 0.0690 | 1100 | 4.6909 | - | - |
| 0.0752 | 1200 | 4.5716 | - | - |
| 0.0815 | 1300 | 4.3983 | - | - |
| 0.0878 | 1400 | 4.2073 | - | - |
| 0.0941 | 1500 | 4.2164 | 4.1422 | - |
| 0.1003 | 1600 | 4.0921 | - | - |
| 0.1066 | 1700 | 4.1785 | - | - |
| 0.1129 | 1800 | 4.0503 | - | - |
| 0.1191 | 1900 | 3.8969 | - | - |
| 0.1254 | 2000 | 3.8538 | 3.9109 | - |
| 0.1317 | 2100 | 3.872 | - | - |
| 0.1380 | 2200 | 3.851 | - | - |
| 0.1442 | 2300 | 3.6301 | - | - |
| 0.1505 | 2400 | 3.5202 | - | - |
| 0.1568 | 2500 | 3.6759 | 3.6389 | - |
| 0.1630 | 2600 | 3.4106 | - | - |
| 0.1693 | 2700 | 3.69 | - | - |
| 0.1756 | 2800 | 3.6336 | - | - |
| 0.1819 | 2900 | 3.4715 | - | - |
| 0.1881 | 3000 | 3.2166 | 3.2739 | - |
| 0.1944 | 3100 | 3.3844 | - | - |
| 0.2007 | 3200 | 3.4449 | - | - |
| 0.2069 | 3300 | 3.0811 | - | - |
| 0.2132 | 3400 | 3.2777 | - | - |
| 0.2195 | 3500 | 2.9505 | 3.0865 | - |
| 0.2257 | 3600 | 3.1534 | - | - |
| 0.2320 | 3700 | 2.9669 | - | - |
| 0.2383 | 3800 | 2.9416 | - | - |
| 0.2446 | 3900 | 2.9637 | - | - |
| 0.2508 | 4000 | 2.9322 | 2.8447 | - |
| 0.2571 | 4100 | 2.6926 | - | - |
| 0.2634 | 4200 | 2.9353 | - | - |
| 0.2696 | 4300 | 2.635 | - | - |
| 0.2759 | 4400 | 2.5692 | - | - |
| 0.2822 | 4500 | 3.0283 | 2.9033 | - |
| 0.2885 | 4600 | 2.5804 | - | - |
| 0.2947 | 4700 | 3.1374 | - | - |
| 0.3010 | 4800 | 2.8479 | - | - |
| 0.3073 | 4900 | 2.6809 | - | - |
| 0.3135 | 5000 | 2.8267 | 2.6946 | - |
| 0.3198 | 5100 | 2.7341 | - | - |
| 0.3261 | 5200 | 2.8157 | - | - |
| 0.3324 | 5300 | 2.5867 | - | - |
| 0.3386 | 5400 | 2.8622 | - | - |
| 0.3449 | 5500 | 2.9063 | 2.6115 | - |
| 0.3512 | 5600 | 2.1514 | - | - |
| 0.3574 | 5700 | 2.3755 | - | - |
| 0.3637 | 5800 | 2.5055 | - | - |
| 0.3700 | 5900 | 3.3237 | - | - |
| 0.3762 | 6000 | 2.561 | 2.7512 | - |
| 0.3825 | 6100 | 2.4351 | - | - |
| 0.3888 | 6200 | 2.8472 | - | - |
| 0.3951 | 6300 | 2.76 | - | - |
| 0.4013 | 6400 | 2.1947 | - | - |
| 0.4076 | 6500 | 2.6409 | 2.5367 | - |
| 0.4139 | 6600 | 2.7262 | - | - |
| 0.4201 | 6700 | 2.7781 | - | - |
| 0.4264 | 6800 | 2.4718 | - | - |
| 0.4327 | 6900 | 2.567 | - | - |
| 0.4390 | 7000 | 2.4215 | 2.3409 | - |
| 0.4452 | 7100 | 1.9308 | - | - |
| 0.4515 | 7200 | 2.1232 | - | - |
| 0.4578 | 7300 | 2.421 | - | - |
| 0.4640 | 7400 | 2.3232 | - | - |
| 0.4703 | 7500 | 2.8543 | 2.3706 | - |
| 0.4766 | 7600 | 2.4276 | - | - |
| 0.4828 | 7700 | 2.4507 | - | - |
| 0.4891 | 7800 | 2.1963 | - | - |
| 0.4954 | 7900 | 2.4247 | - | - |
| 0.5017 | 8000 | 2.1948 | 2.5729 | - |
| 0.5079 | 8100 | 2.4069 | - | - |
| 0.5142 | 8200 | 2.4328 | - | - |
| 0.5205 | 8300 | 2.2198 | - | - |
| 0.5267 | 8400 | 2.1746 | - | - |
| 0.5330 | 8500 | 2.2618 | 2.3459 | - |
| 0.5393 | 8600 | 2.3909 | - | - |
| 0.5456 | 8700 | 2.035 | - | - |
| 0.5518 | 8800 | 2.2626 | - | - |
| 0.5581 | 8900 | 2.1541 | - | - |
| 0.5644 | 9000 | 1.9424 | 2.1625 | - |
| 0.5706 | 9100 | 2.5152 | - | - |
| 0.5769 | 9200 | 2.0462 | - | - |
| 0.5832 | 9300 | 1.6124 | - | - |
| 0.5895 | 9400 | 2.2236 | - | - |
| 0.5957 | 9500 | 2.4706 | 2.0569 | - |
| 0.6020 | 9600 | 2.4612 | - | - |
| 0.6083 | 9700 | 2.2784 | - | - |
| 0.6145 | 9800 | 1.9335 | - | - |
| 0.6208 | 9900 | 2.3779 | - | - |
| 0.6271 | 10000 | 1.6778 | 2.1123 | - |
| 0.6333 | 10100 | 2.4721 | - | - |
| 0.6396 | 10200 | 1.7822 | - | - |
| 0.6459 | 10300 | 2.077 | - | - |
| 0.6522 | 10400 | 1.9223 | - | - |
| 0.6584 | 10500 | 2.3513 | 1.8403 | - |
| 0.6647 | 10600 | 2.1387 | - | - |
| 0.6710 | 10700 | 2.1853 | - | - |
| 0.6772 | 10800 | 1.8715 | - | - |
| 0.6835 | 10900 | 1.8581 | - | - |
| 0.6898 | 11000 | 2.0076 | 2.0063 | - |
| 0.6961 | 11100 | 2.3144 | - | - |
| 0.7023 | 11200 | 2.0942 | - | - |
| 0.7086 | 11300 | 1.9117 | - | - |
| 0.7149 | 11400 | 2.2214 | - | - |
| 0.7211 | 11500 | 1.9678 | 1.9029 | - |
| 0.7274 | 11600 | 1.7459 | - | - |
| 0.7337 | 11700 | 2.0616 | - | - |
| 0.7400 | 11800 | 1.6169 | - | - |
| 0.7462 | 11900 | 1.5674 | - | - |
| 0.7525 | 12000 | 1.4956 | 1.8267 | - |
| 0.7588 | 12100 | 2.3816 | - | - |
| 0.7650 | 12200 | 2.2387 | - | - |
| 0.7713 | 12300 | 1.4625 | - | - |
| 0.7776 | 12400 | 2.028 | - | - |
| 0.7838 | 12500 | 2.151 | 1.7581 | - |
| 0.7901 | 12600 | 1.6896 | - | - |
| 0.7964 | 12700 | 1.8526 | - | - |
| 0.8027 | 12800 | 1.9745 | - | - |
| 0.8089 | 12900 | 2.1042 | - | - |
| 0.8152 | 13000 | 1.83 | 1.5667 | - |
| 0.8215 | 13100 | 1.7451 | - | - |
| 0.8277 | 13200 | 1.568 | - | - |
| 0.8340 | 13300 | 1.4432 | - | - |
| 0.8403 | 13400 | 1.9172 | - | - |
| 0.8466 | 13500 | 1.9438 | 1.6055 | - |
| 0.8528 | 13600 | 1.6488 | - | - |
| 0.8591 | 13700 | 1.8166 | - | - |
| 0.8654 | 13800 | 1.5929 | - | - |
| 0.8716 | 13900 | 1.2476 | - | - |
| 0.8779 | 14000 | 1.5236 | 1.8921 | - |
| 0.8842 | 14100 | 1.6538 | - | - |
| 0.8904 | 14200 | 1.8689 | - | - |
| 0.8967 | 14300 | 1.0831 | - | - |
| 0.9030 | 14400 | 1.7765 | - | - |
| 0.9093 | 14500 | 1.3548 | 1.6683 | - |
| 0.9155 | 14600 | 1.7792 | - | - |
| 0.9218 | 14700 | 1.73 | - | - |
| 0.9281 | 14800 | 1.5979 | - | - |
| 0.9343 | 14900 | 1.3678 | - | - |
| 0.9406 | 15000 | 2.0664 | 1.5161 | - |
| 0.9469 | 15100 | 1.4472 | - | - |
| 0.9532 | 15200 | 1.447 | - | - |
| 0.9594 | 15300 | 1.7261 | - | - |
| 0.9657 | 15400 | 1.4881 | - | - |
| 0.9720 | 15500 | 1.313 | 1.6227 | - |
| 0.9782 | 15600 | 1.4587 | - | - |
| 0.9845 | 15700 | 2.0982 | - | - |
| 0.9908 | 15800 | 1.4854 | - | - |
| 0.9971 | 15900 | 1.343 | - | - |
| 1.0033 | 16000 | 1.1795 | 1.5639 | - |
| 1.0096 | 16100 | 1.4001 | - | - |
| 1.0159 | 16200 | 1.3867 | - | - |
| 1.0221 | 16300 | 1.5191 | - | - |
| 1.0284 | 16400 | 1.4693 | - | - |
| 1.0347 | 16500 | 1.628 | 1.4716 | - |
| 1.0409 | 16600 | 1.0041 | - | - |
| 1.0472 | 16700 | 1.7728 | - | - |
| 1.0535 | 16800 | 1.5586 | - | - |
| 1.0598 | 16900 | 1.7229 | - | - |
| 1.0660 | 17000 | 1.5556 | 1.4676 | - |
| 1.0723 | 17100 | 1.2529 | - | - |
| 1.0786 | 17200 | 1.4787 | - | - |
| 1.0848 | 17300 | 1.1947 | - | - |
| 1.0911 | 17400 | 1.3014 | - | - |
| 1.0974 | 17500 | 1.3743 | 1.4624 | - |
| 1.1037 | 17600 | 1.3397 | - | - |
| 1.1099 | 17700 | 1.3062 | - | - |
| 1.1162 | 17800 | 1.3288 | - | - |
| 1.1225 | 17900 | 2.0002 | - | - |
| 1.1287 | 18000 | 2.0294 | 1.4185 | - |
| 1.1350 | 18100 | 1.5053 | - | - |
| 1.1413 | 18200 | 1.3657 | - | - |
| 1.1476 | 18300 | 1.3877 | - | - |
| 1.1538 | 18400 | 1.9034 | - | - |
| 1.1601 | 18500 | 1.4001 | 1.3813 | - |
| 1.1664 | 18600 | 1.7503 | - | - |
| 1.1726 | 18700 | 1.1482 | - | - |
| 1.1789 | 18800 | 1.0958 | - | - |
| 1.1852 | 18900 | 1.2657 | - | - |
| 1.1914 | 19000 | 1.3721 | 1.4702 | - |
| 1.1977 | 19100 | 1.2361 | - | - |
| 1.2040 | 19200 | 1.003 | - | - |
| 1.2103 | 19300 | 1.3677 | - | - |
| 1.2165 | 19400 | 1.668 | - | - |
| 1.2228 | 19500 | 1.2026 | 1.3641 | - |
| 1.2291 | 19600 | 1.1754 | - | - |
| 1.2353 | 19700 | 1.3196 | - | - |
| 1.2416 | 19800 | 1.4766 | - | - |
| 1.2479 | 19900 | 1.389 | - | - |
| 1.2542 | 20000 | 1.6974 | 1.3344 | - |
| 1.2604 | 20100 | 1.5036 | - | - |
| 1.2667 | 20200 | 1.1728 | - | - |
| 1.2730 | 20300 | 1.6058 | - | - |
| 1.2792 | 20400 | 1.5191 | - | - |
| 1.2855 | 20500 | 1.4516 | 1.3210 | - |
| 1.2918 | 20600 | 1.3485 | - | - |
| 1.2980 | 20700 | 1.2598 | - | - |
| 1.3043 | 20800 | 1.5871 | - | - |
| 1.3106 | 20900 | 1.1965 | - | - |
| 1.3169 | 21000 | 1.3983 | 1.2517 | - |
| 1.3231 | 21100 | 1.2605 | - | - |
| 1.3294 | 21200 | 1.5629 | - | - |
| 1.3357 | 21300 | 1.0668 | - | - |
| 1.3419 | 21400 | 1.1879 | - | - |
| 1.3482 | 21500 | 1.132 | 1.3881 | - |
| 1.3545 | 21600 | 1.7231 | - | - |
| 1.3608 | 21700 | 1.7636 | - | - |
| 1.3670 | 21800 | 1.1193 | - | - |
| 1.3733 | 21900 | 1.4662 | - | - |
| 1.3796 | 22000 | 2.0394 | 1.1927 | - |
| 1.3858 | 22100 | 1.1535 | - | - |
| 1.3921 | 22200 | 1.4592 | - | - |
| 1.3984 | 22300 | 1.276 | - | - |
| 1.4047 | 22400 | 1.2984 | - | - |
| 1.4109 | 22500 | 0.9741 | 1.2707 | - |
| 1.4172 | 22600 | 1.4253 | - | - |
| 1.4235 | 22700 | 1.0769 | - | - |
| 1.4297 | 22800 | 0.8276 | - | - |
| 1.4360 | 22900 | 1.2689 | - | - |
| 1.4423 | 23000 | 1.4817 | 1.2095 | - |
| 1.4485 | 23100 | 1.1522 | - | - |
| 1.4548 | 23200 | 0.8978 | - | - |
| 1.4611 | 23300 | 1.015 | - | - |
| 1.4674 | 23400 | 1.0351 | - | - |
| 1.4736 | 23500 | 1.3959 | 1.1969 | - |
| 1.4799 | 23600 | 1.2879 | - | - |
| 1.4862 | 23700 | 1.0651 | - | - |
| 1.4924 | 23800 | 1.1601 | - | - |
| 1.4987 | 23900 | 1.0034 | - | - |
| 1.5050 | 24000 | 1.3386 | 1.1590 | - |
| 1.5113 | 24100 | 1.142 | - | - |
| 1.5175 | 24200 | 1.3495 | - | - |
| 1.5238 | 24300 | 0.9993 | - | - |
| 1.5301 | 24400 | 0.9363 | - | - |
| 1.5363 | 24500 | 1.4402 | 1.2178 | - |
| 1.5426 | 24600 | 1.0648 | - | - |
| 1.5489 | 24700 | 1.5102 | - | - |
| 1.5552 | 24800 | 1.3415 | - | - |
| 1.5614 | 24900 | 0.7441 | - | - |
| 1.5677 | 25000 | 0.901 | 1.1982 | - |
| 1.5740 | 25100 | 1.3147 | - | - |
| 1.5802 | 25200 | 0.971 | - | - |
| 1.5865 | 25300 | 0.9988 | - | - |
| 1.5928 | 25400 | 1.1445 | - | - |
| 1.5990 | 25500 | 1.1018 | 1.1423 | - |
| 1.6053 | 25600 | 1.0902 | - | - |
| 1.6116 | 25700 | 1.2577 | - | - |
| 1.6179 | 25800 | 1.2005 | - | - |
| 1.6241 | 25900 | 1.2839 | - | - |
| 1.6304 | 26000 | 1.4122 | 1.1125 | - |
| 1.6367 | 26100 | 0.7832 | - | - |
| 1.6429 | 26200 | 1.3278 | - | - |
| 1.6492 | 26300 | 1.2055 | - | - |
| 1.6555 | 26400 | 1.5814 | - | - |
| 1.6618 | 26500 | 1.0393 | 1.0946 | - |
| 1.6680 | 26600 | 1.4531 | - | - |
| 1.6743 | 26700 | 1.4162 | - | - |
| 1.6806 | 26800 | 0.8498 | - | - |
| 1.6868 | 26900 | 1.1318 | - | - |
| 1.6931 | 27000 | 1.3287 | 1.0439 | - |
| 1.6994 | 27100 | 1.0886 | - | - |
| 1.7056 | 27200 | 0.8991 | - | - |
| 1.7119 | 27300 | 0.7563 | - | - |
| 1.7182 | 27400 | 0.9284 | - | - |
| 1.7245 | 27500 | 1.3388 | 1.0940 | - |
| 1.7307 | 27600 | 1.2951 | - | - |
| 1.7370 | 27700 | 0.9789 | - | - |
| 1.7433 | 27800 | 1.2898 | - | - |
| 1.7495 | 27900 | 0.9915 | - | - |
| 1.7558 | 28000 | 1.5349 | 1.0266 | - |
| 1.7621 | 28100 | 1.124 | - | - |
| 1.7684 | 28200 | 0.809 | - | - |
| 1.7746 | 28300 | 0.9617 | - | - |
| 1.7809 | 28400 | 1.3061 | - | - |
| 1.7872 | 28500 | 1.1323 | 1.0488 | - |
| 1.7934 | 28600 | 1.2991 | - | - |
| 1.7997 | 28700 | 0.8708 | - | - |
| 1.8060 | 28800 | 0.7493 | - | - |
| 1.8123 | 28900 | 1.004 | - | - |
| 1.8185 | 29000 | 1.1477 | 1.0206 | - |
| 1.8248 | 29100 | 1.1826 | - | - |
| 1.8311 | 29200 | 1.0961 | - | - |
| 1.8373 | 29300 | 1.4743 | - | - |
| 1.8436 | 29400 | 0.8413 | - | - |
| 1.8499 | 29500 | 1.2623 | 1.0047 | - |
| 1.8561 | 29600 | 0.8486 | - | - |
| 1.8624 | 29700 | 1.4481 | - | - |
| 1.8687 | 29800 | 1.2704 | - | - |
| 1.8750 | 29900 | 1.1913 | - | - |
| 1.8812 | 30000 | 0.9369 | 1.0277 | - |
| 1.8875 | 30100 | 1.2427 | - | - |
| 1.8938 | 30200 | 1.0576 | - | - |
| 1.9000 | 30300 | 0.9188 | - | - |
| 1.9063 | 30400 | 1.3227 | - | - |
| 1.9126 | 30500 | 1.4614 | 1.0550 | - |
| 1.9189 | 30600 | 1.2316 | - | - |
| 1.9251 | 30700 | 0.9487 | - | - |
| 1.9314 | 30800 | 1.1651 | - | - |
| 1.9377 | 30900 | 1.1622 | - | - |
| 1.9439 | 31000 | 1.1801 | 0.9981 | - |
| 1.9502 | 31100 | 0.8798 | - | - |
| 1.9565 | 31200 | 0.7196 | - | - |
| 1.9628 | 31300 | 1.2003 | - | - |
| 1.9690 | 31400 | 1.1823 | - | - |
| 1.9753 | 31500 | 1.1453 | 1.0320 | - |
| 1.9816 | 31600 | 1.4751 | - | - |
| 1.9878 | 31700 | 0.8502 | - | - |
| 1.9941 | 31800 | 0.8757 | - | - |
| 2.0004 | 31900 | 1.0489 | - | - |
| 2.0066 | 32000 | 1.4672 | 1.0571 | - |
| 2.0129 | 32100 | 0.9474 | - | - |
| 2.0192 | 32200 | 0.8037 | - | - |
| 2.0255 | 32300 | 0.9782 | - | - |
| 2.0317 | 32400 | 0.6943 | - | - |
| 2.0380 | 32500 | 1.0097 | 0.9797 | - |
| 2.0443 | 32600 | 0.9067 | - | - |
| 2.0505 | 32700 | 1.09 | - | - |
| 2.0568 | 32800 | 0.8464 | - | - |
| 2.0631 | 32900 | 0.9359 | - | - |
| 2.0694 | 33000 | 0.813 | 0.9907 | - |
| 2.0756 | 33100 | 0.8738 | - | - |
| 2.0819 | 33200 | 0.8178 | - | - |
| 2.0882 | 33300 | 1.1704 | - | - |
| 2.0944 | 33400 | 1.0073 | - | - |
| 2.1007 | 33500 | 1.1849 | 0.9582 | - |
| 2.1070 | 33600 | 0.7795 | - | - |
| 2.1133 | 33700 | 0.7688 | - | - |
| 2.1195 | 33800 | 0.9465 | - | - |
| 2.1258 | 33900 | 1.0883 | - | - |
| 2.1321 | 34000 | 0.7711 | 0.9557 | - |
| 2.1383 | 34100 | 0.9767 | - | - |
| 2.1446 | 34200 | 0.6702 | - | - |
| 2.1509 | 34300 | 0.9444 | - | - |
| 2.1571 | 34400 | 0.8741 | - | - |
| 2.1634 | 34500 | 1.0717 | 0.9526 | - |
| 2.1697 | 34600 | 0.8584 | - | - |
| 2.1760 | 34700 | 0.8926 | - | - |
| 2.1822 | 34800 | 0.8567 | - | - |
| 2.1885 | 34900 | 0.71 | - | - |
| 2.1948 | 35000 | 1.1285 | 0.9589 | - |
| 2.2010 | 35100 | 0.8999 | - | - |
| 2.2073 | 35200 | 0.8459 | - | - |
| 2.2136 | 35300 | 1.0608 | - | - |
| 2.2199 | 35400 | 0.6115 | - | - |
| 2.2261 | 35500 | 1.2468 | 0.9769 | - |
| 2.2324 | 35600 | 0.9987 | - | - |
| 2.2387 | 35700 | 0.9186 | - | - |
| 2.2449 | 35800 | 1.0505 | - | - |
| 2.2512 | 35900 | 0.6253 | - | - |
| 2.2575 | 36000 | 0.6523 | 0.9501 | - |
| 2.2637 | 36100 | 0.8252 | - | - |
| 2.2700 | 36200 | 0.9793 | - | - |
| 2.2763 | 36300 | 0.8845 | - | - |
| 2.2826 | 36400 | 1.0121 | - | - |
| 2.2888 | 36500 | 0.9849 | 0.9245 | - |
| 2.2951 | 36600 | 1.2937 | - | - |
| 2.3014 | 36700 | 1.0484 | - | - |
| 2.3076 | 36800 | 0.8801 | - | - |
| 2.3139 | 36900 | 0.7552 | - | - |
| 2.3202 | 37000 | 0.7641 | 0.9280 | - |
| 2.3265 | 37100 | 0.883 | - | - |
| 2.3327 | 37200 | 0.77 | - | - |
| 2.3390 | 37300 | 1.2699 | - | - |
| 2.3453 | 37400 | 0.8766 | - | - |
| 2.3515 | 37500 | 1.1154 | 0.9623 | - |
| 2.3578 | 37600 | 1.0634 | - | - |
| 2.3641 | 37700 | 0.8822 | - | - |
| 2.3704 | 37800 | 0.839 | - | - |
| 2.3766 | 37900 | 0.684 | - | - |
| 2.3829 | 38000 | 0.8051 | 0.9198 | - |
| 2.3892 | 38100 | 0.9585 | - | - |
| 2.3954 | 38200 | 0.7156 | - | - |
| 2.4017 | 38300 | 0.5271 | - | - |
| 2.4080 | 38400 | 0.805 | - | - |
| 2.4142 | 38500 | 0.7898 | 0.8785 | - |
| 2.4205 | 38600 | 0.6935 | - | - |
| 2.4268 | 38700 | 0.8011 | - | - |
| 2.4331 | 38800 | 0.9812 | - | - |
| 2.4393 | 38900 | 0.4427 | - | - |
| 2.4456 | 39000 | 0.492 | 0.9313 | - |
| 2.4519 | 39100 | 0.47 | - | - |
| 2.4581 | 39200 | 1.1876 | - | - |
| 2.4644 | 39300 | 0.5778 | - | - |
| 2.4707 | 39400 | 0.6763 | - | - |
| 2.4770 | 39500 | 0.6896 | 0.8978 | - |
| 2.4832 | 39600 | 0.8905 | - | - |
| 2.4895 | 39700 | 0.7845 | - | - |
| 2.4958 | 39800 | 0.8691 | - | - |
| 2.5020 | 39900 | 0.55 | - | - |
| 2.5083 | 40000 | 0.6978 | 0.9054 | - |
| 2.5146 | 40100 | 0.6378 | - | - |
| 2.5209 | 40200 | 0.895 | - | - |
| 2.5271 | 40300 | 0.9683 | - | - |
| 2.5334 | 40400 | 0.9373 | - | - |
| 2.5397 | 40500 | 0.7406 | 0.9128 | - |
| 2.5459 | 40600 | 0.8917 | - | - |
| 2.5522 | 40700 | 1.0552 | - | - |
| 2.5585 | 40800 | 0.5281 | - | - |
| 2.5647 | 40900 | 0.9064 | - | - |
| 2.5710 | 41000 | 0.6886 | 0.9049 | - |
| 2.5773 | 41100 | 0.7166 | - | - |
| 2.5836 | 41200 | 0.8343 | - | - |
| 2.5898 | 41300 | 0.9468 | - | - |
| 2.5961 | 41400 | 0.8529 | - | - |
| 2.6024 | 41500 | 0.8092 | 0.8954 | - |
| 2.6086 | 41600 | 0.8501 | - | - |
| 2.6149 | 41700 | 0.9877 | - | - |
| 2.6212 | 41800 | 0.8592 | - | - |
| 2.6275 | 41900 | 0.8632 | - | - |
| 2.6337 | 42000 | 0.6766 | 0.8707 | - |
| 2.6400 | 42100 | 0.7587 | - | - |
| 2.6463 | 42200 | 0.8949 | - | - |
| 2.6525 | 42300 | 0.4173 | - | - |
| 2.6588 | 42400 | 0.5995 | - | - |
| 2.6651 | 42500 | 0.8157 | 0.8681 | - |
| 2.6713 | 42600 | 0.92 | - | - |
| 2.6776 | 42700 | 0.9118 | - | - |
| 2.6839 | 42800 | 0.7446 | - | - |
| 2.6902 | 42900 | 0.6835 | - | - |
| 2.6964 | 43000 | 0.6157 | 0.8691 | - |
| 2.7027 | 43100 | 0.5423 | - | - |
| 2.7090 | 43200 | 0.8098 | - | - |
| 2.7152 | 43300 | 0.8908 | - | - |
| 2.7215 | 43400 | 1.1275 | - | - |
| 2.7278 | 43500 | 1.0345 | 0.8884 | - |
| 2.7341 | 43600 | 0.6198 | - | - |
| 2.7403 | 43700 | 0.8315 | - | - |
| 2.7466 | 43800 | 0.9317 | - | - |
| 2.7529 | 43900 | 0.516 | - | - |
| 2.7591 | 44000 | 0.8229 | 0.8659 | - |
| 2.7654 | 44100 | 0.7989 | - | - |
| 2.7717 | 44200 | 0.9291 | - | - |
| 2.7780 | 44300 | 0.5954 | - | - |
| 2.7842 | 44400 | 0.8537 | - | - |
| 2.7905 | 44500 | 0.9506 | 0.8657 | - |
| 2.7968 | 44600 | 0.5789 | - | - |
| 2.8030 | 44700 | 0.4861 | - | - |
| 2.8093 | 44800 | 0.9614 | - | - |
| 2.8156 | 44900 | 1.0069 | - | - |
| 2.8218 | 45000 | 0.5599 | 0.8619 | - |
| 2.8281 | 45100 | 1.3747 | - | - |
| 2.8344 | 45200 | 0.5638 | - | - |
| 2.8407 | 45300 | 1.2095 | - | - |
| 2.8469 | 45400 | 0.7364 | - | - |
| 2.8532 | 45500 | 0.5692 | 0.8818 | - |
| 2.8595 | 45600 | 0.8848 | - | - |
| 2.8657 | 45700 | 0.9063 | - | - |
| 2.8720 | 45800 | 0.8675 | - | - |
| 2.8783 | 45900 | 0.9703 | - | - |
| 2.8846 | 46000 | 0.6657 | 0.8424 | - |
| 2.8908 | 46100 | 0.6564 | - | - |
| 2.8971 | 46200 | 0.7945 | - | - |
| 2.9034 | 46300 | 0.6341 | - | - |
| 2.9096 | 46400 | 1.042 | - | - |
| 2.9159 | 46500 | 1.0812 | 0.8510 | - |
| 2.9222 | 46600 | 0.9787 | - | - |
| 2.9285 | 46700 | 0.8732 | - | - |
| 2.9347 | 46800 | 1.1872 | - | - |
| 2.9410 | 46900 | 0.989 | - | - |
| 2.9473 | 47000 | 0.874 | 0.8215 | - |
| 2.9535 | 47100 | 1.0229 | - | - |
| 2.9598 | 47200 | 0.9888 | - | - |
| 2.9661 | 47300 | 0.4883 | - | - |
| 2.9723 | 47400 | 0.7474 | - | - |
| 2.9786 | 47500 | 0.7615 | 0.8218 | - |
| 2.9849 | 47600 | 0.6208 | - | - |
| 2.9912 | 47700 | 0.8332 | - | - |
| 2.9974 | 47800 | 0.6734 | - | - |
| 3.0037 | 47900 | 0.5095 | - | - |
| 3.0100 | 48000 | 0.7709 | 0.8220 | - |
| 3.0162 | 48100 | 0.5449 | - | - |
| 3.0225 | 48200 | 0.772 | - | - |
| 3.0288 | 48300 | 0.8582 | - | - |
| 3.0351 | 48400 | 0.5742 | - | - |
| 3.0413 | 48500 | 0.5584 | 0.8493 | - |
| 3.0476 | 48600 | 0.9766 | - | - |
| 3.0539 | 48700 | 0.6473 | - | - |
| 3.0601 | 48800 | 0.5861 | - | - |
| 3.0664 | 48900 | 0.6377 | - | - |
| 3.0727 | 49000 | 0.8393 | 0.8430 | - |
| 3.0789 | 49100 | 0.8385 | - | - |
| 3.0852 | 49200 | 0.5523 | - | - |
| 3.0915 | 49300 | 0.6217 | - | - |
| 3.0978 | 49400 | 0.5515 | - | - |
| 3.1040 | 49500 | 0.851 | 0.8000 | - |
| 3.1103 | 49600 | 0.9247 | - | - |
| 3.1166 | 49700 | 0.655 | - | - |
| 3.1228 | 49800 | 0.4979 | - | - |
| 3.1291 | 49900 | 0.7521 | - | - |
| 3.1354 | 50000 | 0.53 | 0.8105 | - |
| 3.1417 | 50100 | 0.5943 | - | - |
| 3.1479 | 50200 | 0.4659 | - | - |
| 3.1542 | 50300 | 0.4843 | - | - |
| 3.1605 | 50400 | 0.7577 | - | - |
| 3.1667 | 50500 | 0.3448 | 0.8055 | - |
| 3.1730 | 50600 | 0.8392 | - | - |
| 3.1793 | 50700 | 0.75 | - | - |
| 3.1856 | 50800 | 0.5195 | - | - |
| 3.1918 | 50900 | 0.617 | - | - |
| 3.1981 | 51000 | 0.6892 | 0.8293 | - |
| 3.2044 | 51100 | 0.497 | - | - |
| 3.2106 | 51200 | 0.6793 | - | - |
| 3.2169 | 51300 | 0.7251 | - | - |
| 3.2232 | 51400 | 0.6471 | - | - |
| 3.2294 | 51500 | 0.775 | 0.8013 | - |
| 3.2357 | 51600 | 0.7289 | - | - |
| 3.2420 | 51700 | 0.6894 | - | - |
| 3.2483 | 51800 | 0.5677 | - | - |
| 3.2545 | 51900 | 0.317 | - | - |
| 3.2608 | 52000 | 0.5376 | 0.7853 | - |
| 3.2671 | 52100 | 0.4582 | - | - |
| 3.2733 | 52200 | 0.8505 | - | - |
| 3.2796 | 52300 | 0.6236 | - | - |
| 3.2859 | 52400 | 0.7388 | - | - |
| 3.2922 | 52500 | 0.7061 | 0.7863 | - |
| 3.2984 | 52600 | 0.5411 | - | - |
| 3.3047 | 52700 | 0.9511 | - | - |
| 3.3110 | 52800 | 0.5364 | - | - |
| 3.3172 | 52900 | 0.5795 | - | - |
| 3.3235 | 53000 | 0.5305 | 0.7876 | - |
| 3.3298 | 53100 | 0.8051 | - | - |
| 3.3361 | 53200 | 0.5342 | - | - |
| 3.3423 | 53300 | 0.4567 | - | - |
| 3.3486 | 53400 | 0.9751 | - | - |
| 3.3549 | 53500 | 0.4413 | 0.8008 | - |
| 3.3611 | 53600 | 0.6011 | - | - |
| 3.3674 | 53700 | 0.4708 | - | - |
| 3.3737 | 53800 | 0.6167 | - | - |
| 3.3799 | 53900 | 0.7653 | - | - |
| 3.3862 | 54000 | 0.7781 | 0.7897 | - |
| 3.3925 | 54100 | 0.9323 | - | - |
| 3.3988 | 54200 | 0.6003 | - | - |
| 3.4050 | 54300 | 0.5268 | - | - |
| 3.4113 | 54400 | 0.6639 | - | - |
| 3.4176 | 54500 | 0.388 | 0.7855 | - |
| 3.4238 | 54600 | 0.7258 | - | - |
| 3.4301 | 54700 | 0.6475 | - | - |
| 3.4364 | 54800 | 0.795 | - | - |
| 3.4427 | 54900 | 0.4978 | - | - |
| 3.4489 | 55000 | 0.6259 | 0.7705 | - |
| 3.4552 | 55100 | 0.791 | - | - |
| 3.4615 | 55200 | 0.7602 | - | - |
| 3.4677 | 55300 | 0.2236 | - | - |
| 3.4740 | 55400 | 0.5577 | - | - |
| 3.4803 | 55500 | 0.4214 | 0.7683 | - |
| 3.4865 | 55600 | 0.7335 | - | - |
| 3.4928 | 55700 | 0.7536 | - | - |
| 3.4991 | 55800 | 0.4577 | - | - |
| 3.5054 | 55900 | 0.5869 | - | - |
| 3.5116 | 56000 | 0.8563 | 0.7587 | - |
| 3.5179 | 56100 | 0.9291 | - | - |
| 3.5242 | 56200 | 0.4387 | - | - |
| 3.5304 | 56300 | 0.4491 | - | - |
| 3.5367 | 56400 | 0.506 | - | - |
| 3.5430 | 56500 | 0.6626 | 0.7634 | - |
| 3.5493 | 56600 | 0.8654 | - | - |
| 3.5555 | 56700 | 0.4455 | - | - |
| 3.5618 | 56800 | 0.4593 | - | - |
| 3.5681 | 56900 | 0.878 | - | - |
| 3.5743 | 57000 | 0.3737 | 0.7617 | - |
| 3.5806 | 57100 | 0.377 | - | - |
| 3.5869 | 57200 | 0.6894 | - | - |
| 3.5932 | 57300 | 0.6635 | - | - |
| 3.5994 | 57400 | 0.9224 | - | - |
| 3.6057 | 57500 | 0.635 | 0.7669 | - |
| 3.6120 | 57600 | 0.6797 | - | - |
| 3.6182 | 57700 | 0.9814 | - | - |
| 3.6245 | 57800 | 0.9893 | - | - |
| 3.6308 | 57900 | 0.6753 | - | - |
| 3.6370 | 58000 | 0.8349 | 0.7501 | - |
| 3.6433 | 58100 | 0.8523 | - | - |
| 3.6496 | 58200 | 0.2962 | - | - |
| 3.6559 | 58300 | 0.6585 | - | - |
| 3.6621 | 58400 | 1.0247 | - | - |
| 3.6684 | 58500 | 0.8638 | 0.7577 | - |
| 3.6747 | 58600 | 0.9456 | - | - |
| 3.6809 | 58700 | 0.5401 | - | - |
| 3.6872 | 58800 | 0.6602 | - | - |
| 3.6935 | 58900 | 0.7543 | - | - |
| 3.6998 | 59000 | 0.7893 | 0.7600 | - |
| 3.7060 | 59100 | 0.7746 | - | - |
| 3.7123 | 59200 | 0.6539 | - | - |
| 3.7186 | 59300 | 0.8083 | - | - |
| 3.7248 | 59400 | 0.3429 | - | - |
| 3.7311 | 59500 | 0.5005 | 0.7445 | - |
| 3.7374 | 59600 | 0.6238 | - | - |
| 3.7437 | 59700 | 0.4343 | - | - |
| 3.7499 | 59800 | 0.8189 | - | - |
| 3.7562 | 59900 | 0.6272 | - | - |
| 3.7625 | 60000 | 0.2982 | 0.7597 | - |
| 3.7687 | 60100 | 0.7028 | - | - |
| 3.7750 | 60200 | 0.9447 | - | - |
| 3.7813 | 60300 | 0.6175 | - | - |
| 3.7875 | 60400 | 0.5856 | - | - |
| 3.7938 | 60500 | 0.8249 | 0.7505 | - |
| 3.8001 | 60600 | 0.6617 | - | - |
| 3.8064 | 60700 | 0.5767 | - | - |
| 3.8126 | 60800 | 1.0094 | - | - |
| 3.8189 | 60900 | 0.471 | - | - |
| 3.8252 | 61000 | 0.6313 | 0.7489 | - |
| 3.8314 | 61100 | 0.6545 | - | - |
| 3.8377 | 61200 | 0.699 | - | - |
| 3.8440 | 61300 | 0.6272 | - | - |
| 3.8503 | 61400 | 0.7375 | - | - |
| 3.8565 | 61500 | 0.4213 | 0.7490 | - |
| 3.8628 | 61600 | 0.6631 | - | - |
| 3.8691 | 61700 | 0.552 | - | - |
| 3.8753 | 61800 | 0.7041 | - | - |
| 3.8816 | 61900 | 0.8457 | - | - |
| 3.8879 | 62000 | 0.8104 | 0.7477 | - |
| 3.8941 | 62100 | 0.4494 | - | - |
| 3.9004 | 62200 | 0.6947 | - | - |
| 3.9067 | 62300 | 0.8061 | - | - |
| 3.9130 | 62400 | 0.416 | - | - |
| 3.9192 | 62500 | 0.7359 | 0.7468 | - |
| 3.9255 | 62600 | 0.7408 | - | - |
| 3.9318 | 62700 | 0.6255 | - | - |
| 3.9380 | 62800 | 0.7865 | - | - |
| 3.9443 | 62900 | 0.4879 | - | - |
| 3.9506 | 63000 | 0.5196 | 0.7485 | - |
| 3.9569 | 63100 | 0.5683 | - | - |
| 3.9631 | 63200 | 0.5141 | - | - |
| 3.9694 | 63300 | 0.6068 | - | - |
| 3.9757 | 63400 | 0.5929 | - | - |
| 3.9819 | 63500 | 0.7513 | 0.7482 | - |
| 3.9882 | 63600 | 0.5053 | - | - |
| 3.9945 | 63700 | 0.5707 | - | - |
</details>
### Framework Versions
- Python: 3.8.10
- Sentence Transformers: 3.1.1
- Transformers: 4.45.1
- PyTorch: 2.4.0+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/all-MiniLM-L6-v2", "language": ["en"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["pearson_cosine", "spearman_cosine", "pearson_manhattan", "spearman_manhattan", "pearson_euclidean", "spearman_euclidean", "pearson_dot", "spearman_dot", "pearson_max", "spearman_max"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:510287", "loss:CoSENTLoss"], "widget": [{"source_sentence": "bag", "sentences": ["bag", "summer colors bag", "carry all bag"]}, {"source_sentence": "bean bag", "sentences": ["bag", "havan bag", "black yellow shoes"]}, {"source_sentence": "pyramid shaped cushion mattress", "sentences": ["dress", "silver bag", "women shoes"]}, {"source_sentence": "handcrafted rug", "sentences": ["amaga cross bag - white", "handcrafted boots", "polyester top"]}, {"source_sentence": "bean bag", "sentences": ["bag", "v-neck dress", "bag"]}], "model-index": [{"name": "all-MiniLM-L6-v2-pair_score", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts dev", "type": "sts-dev"}, "metrics": [{"type": "pearson_cosine", "value": -0.13726370961372045, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": -0.16645918619928507, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": -0.1405300294713842, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": -0.16334559546016153, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": -0.1432496898556385, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": -0.16645904911745338, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": -0.13726370008450378, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": -0.1664594964294906, "name": "Spearman Dot"}, {"type": "pearson_max", "value": -0.13726370008450378, "name": "Pearson Max"}, {"type": "spearman_max", "value": -0.16334559546016153, "name": "Spearman Max"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] | 42,459 |
Helsinki-NLP/opus-mt-iir-en
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bn",
"or",
"gu",
"mr",
"ur",
"hi",
"ps",
"os",
"as",
"si",
"iir",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2023-08-16T11:58:17+00:00
| 52 | 1 |
---
language:
- bn
- or
- gu
- mr
- ur
- hi
- ps
- os
- as
- si
- iir
- en
license: apache-2.0
tags:
- translation
---
### iir-eng
* source group: Indo-Iranian languages
* target group: English
* OPUS readme: [iir-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/iir-eng/README.md)
* model: transformer
* source language(s): asm awa ben bho gom guj hif_Latn hin jdt_Cyrl kur_Arab kur_Latn mai mar npi ori oss pan_Guru pes pes_Latn pes_Thaa pnb pus rom san_Deva sin snd_Arab tgk_Cyrl tly_Latn urd zza
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/iir-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/iir-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/iir-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014-hineng.hin.eng | 8.1 | 0.324 |
| newsdev2019-engu-gujeng.guj.eng | 8.1 | 0.309 |
| newstest2014-hien-hineng.hin.eng | 12.1 | 0.380 |
| newstest2019-guen-gujeng.guj.eng | 6.0 | 0.280 |
| Tatoeba-test.asm-eng.asm.eng | 13.9 | 0.327 |
| Tatoeba-test.awa-eng.awa.eng | 7.0 | 0.219 |
| Tatoeba-test.ben-eng.ben.eng | 42.5 | 0.576 |
| Tatoeba-test.bho-eng.bho.eng | 27.3 | 0.452 |
| Tatoeba-test.fas-eng.fas.eng | 5.6 | 0.262 |
| Tatoeba-test.guj-eng.guj.eng | 15.9 | 0.350 |
| Tatoeba-test.hif-eng.hif.eng | 10.1 | 0.247 |
| Tatoeba-test.hin-eng.hin.eng | 36.5 | 0.544 |
| Tatoeba-test.jdt-eng.jdt.eng | 11.4 | 0.094 |
| Tatoeba-test.kok-eng.kok.eng | 6.6 | 0.256 |
| Tatoeba-test.kur-eng.kur.eng | 3.4 | 0.149 |
| Tatoeba-test.lah-eng.lah.eng | 17.4 | 0.301 |
| Tatoeba-test.mai-eng.mai.eng | 65.4 | 0.703 |
| Tatoeba-test.mar-eng.mar.eng | 22.5 | 0.468 |
| Tatoeba-test.multi.eng | 21.3 | 0.424 |
| Tatoeba-test.nep-eng.nep.eng | 3.4 | 0.185 |
| Tatoeba-test.ori-eng.ori.eng | 4.8 | 0.244 |
| Tatoeba-test.oss-eng.oss.eng | 1.6 | 0.173 |
| Tatoeba-test.pan-eng.pan.eng | 14.8 | 0.348 |
| Tatoeba-test.pus-eng.pus.eng | 1.1 | 0.182 |
| Tatoeba-test.rom-eng.rom.eng | 2.8 | 0.185 |
| Tatoeba-test.san-eng.san.eng | 2.8 | 0.185 |
| Tatoeba-test.sin-eng.sin.eng | 22.8 | 0.474 |
| Tatoeba-test.snd-eng.snd.eng | 8.2 | 0.287 |
| Tatoeba-test.tgk-eng.tgk.eng | 11.9 | 0.321 |
| Tatoeba-test.tly-eng.tly.eng | 0.9 | 0.076 |
| Tatoeba-test.urd-eng.urd.eng | 23.9 | 0.438 |
| Tatoeba-test.zza-eng.zza.eng | 0.6 | 0.098 |
### System Info:
- hf_name: iir-eng
- source_languages: iir
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/iir-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir', 'en']
- src_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur_Arab', 'tgk_Cyrl', 'hin', 'kur_Latn', 'pes_Thaa', 'pus', 'san_Deva', 'oss', 'tly_Latn', 'jdt_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes_Latn', 'awa', 'sin'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/iir-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/iir-eng/opus2m-2020-08-01.test.txt
- src_alpha3: iir
- tgt_alpha3: eng
- short_pair: iir-en
- chrF2_score: 0.424
- bleu: 21.3
- brevity_penalty: 1.0
- ref_len: 67026.0
- src_name: Indo-Iranian languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: iir
- tgt_alpha2: en
- prefer_old: False
- long_pair: iir-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| null |
Non_BioNLP
|
### iir-eng
* source group: Indo-Iranian languages
* target group: English
* OPUS readme: [iir-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/iir-eng/README.md)
* model: transformer
* source language(s): asm awa ben bho gom guj hif_Latn hin jdt_Cyrl kur_Arab kur_Latn mai mar npi ori oss pan_Guru pes pes_Latn pes_Thaa pnb pus rom san_Deva sin snd_Arab tgk_Cyrl tly_Latn urd zza
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/iir-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/iir-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/iir-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014-hineng.hin.eng | 8.1 | 0.324 |
| newsdev2019-engu-gujeng.guj.eng | 8.1 | 0.309 |
| newstest2014-hien-hineng.hin.eng | 12.1 | 0.380 |
| newstest2019-guen-gujeng.guj.eng | 6.0 | 0.280 |
| Tatoeba-test.asm-eng.asm.eng | 13.9 | 0.327 |
| Tatoeba-test.awa-eng.awa.eng | 7.0 | 0.219 |
| Tatoeba-test.ben-eng.ben.eng | 42.5 | 0.576 |
| Tatoeba-test.bho-eng.bho.eng | 27.3 | 0.452 |
| Tatoeba-test.fas-eng.fas.eng | 5.6 | 0.262 |
| Tatoeba-test.guj-eng.guj.eng | 15.9 | 0.350 |
| Tatoeba-test.hif-eng.hif.eng | 10.1 | 0.247 |
| Tatoeba-test.hin-eng.hin.eng | 36.5 | 0.544 |
| Tatoeba-test.jdt-eng.jdt.eng | 11.4 | 0.094 |
| Tatoeba-test.kok-eng.kok.eng | 6.6 | 0.256 |
| Tatoeba-test.kur-eng.kur.eng | 3.4 | 0.149 |
| Tatoeba-test.lah-eng.lah.eng | 17.4 | 0.301 |
| Tatoeba-test.mai-eng.mai.eng | 65.4 | 0.703 |
| Tatoeba-test.mar-eng.mar.eng | 22.5 | 0.468 |
| Tatoeba-test.multi.eng | 21.3 | 0.424 |
| Tatoeba-test.nep-eng.nep.eng | 3.4 | 0.185 |
| Tatoeba-test.ori-eng.ori.eng | 4.8 | 0.244 |
| Tatoeba-test.oss-eng.oss.eng | 1.6 | 0.173 |
| Tatoeba-test.pan-eng.pan.eng | 14.8 | 0.348 |
| Tatoeba-test.pus-eng.pus.eng | 1.1 | 0.182 |
| Tatoeba-test.rom-eng.rom.eng | 2.8 | 0.185 |
| Tatoeba-test.san-eng.san.eng | 2.8 | 0.185 |
| Tatoeba-test.sin-eng.sin.eng | 22.8 | 0.474 |
| Tatoeba-test.snd-eng.snd.eng | 8.2 | 0.287 |
| Tatoeba-test.tgk-eng.tgk.eng | 11.9 | 0.321 |
| Tatoeba-test.tly-eng.tly.eng | 0.9 | 0.076 |
| Tatoeba-test.urd-eng.urd.eng | 23.9 | 0.438 |
| Tatoeba-test.zza-eng.zza.eng | 0.6 | 0.098 |
### System Info:
- hf_name: iir-eng
- source_languages: iir
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/iir-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir', 'en']
- src_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur_Arab', 'tgk_Cyrl', 'hin', 'kur_Latn', 'pes_Thaa', 'pus', 'san_Deva', 'oss', 'tly_Latn', 'jdt_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes_Latn', 'awa', 'sin'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/iir-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/iir-eng/opus2m-2020-08-01.test.txt
- src_alpha3: iir
- tgt_alpha3: eng
- short_pair: iir-en
- chrF2_score: 0.424
- bleu: 21.3
- brevity_penalty: 1.0
- ref_len: 67026.0
- src_name: Indo-Iranian languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: iir
- tgt_alpha2: en
- prefer_old: False
- long_pair: iir-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
{"language": ["bn", "or", "gu", "mr", "ur", "hi", "ps", "os", "as", "si", "iir", "en"], "license": "apache-2.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 42,460 |
seongil-dn/bge-m3-kor-retrieval-451949-bs64-book-50
|
seongil-dn
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:451949",
"loss:CachedMultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2101.06983",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-12-14T09:14:36Z |
2024-12-14T09:15:59+00:00
| 9 | 0 |
---
base_model: BAAI/bge-m3
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:451949
- loss:CachedMultipleNegativesRankingLoss
widget:
- source_sentence: 일본 재무성은 배우자의 연간 수입 상한액에 대해 얼마와 130만 엔 안을 제시했어?
sentences:
- 일본 정부는 저출산 대책을 강화할 재원 확보를 위해 기업의 육아지원 출연금을 증액하도록 경제계에 요구할 방침이다. 만약 이 방침이 실현되면
기업의 부담금은 연 최대 1,000억 엔 규모로 확대되고, 확대된 재원은 맞벌이 가구나 다자녀 가구의 육아지원에 사용될 계획이다. 이번 조치는
아베 신조 총리가 주도하는 ‘1억 총 활약 사회’ 실현을 위한 핵심 정책으로 활용될 계획이지만 경제계의 반발도 고려하지 않을 수 없는 상황이다.
경단련과 경제동우회에는 이미 정부 방침이 전달되었는데, 아베 총리는 2015년 9월 말에 발표한 아베노믹스의 2단계 방편인 ‘새로운 세 개의
화살’에서 현재 출산율인 1.4를 2020년대 중반까지 1.8로 상향시킨다는 목표를 밝힌 바 있다. 일본 정부가 기업에 요구하는 것은 연금특별회계의
아동 및 육아지원 계정에 대한 출연금 증액인데, 정부 안에 따르면 현재 월급과 상여금의 0.15%인 기업출연금은 2016년부터는 0.20%로
인상될 전망이다.
- 일본 재무성은 지금까지 배우자의 연간수입 상한액에 대해서 ‘150만 엔 안’과 ‘130만 엔 안’의 두 가지 안을 제시하였는데, 자민당의 세제조사회에서는
‘150만 엔 안’이 효과가 높을 것이라는 의견이 대다수를 차지했다. ‘130만 엔 안’의 경우 배우자의 연간수입이 130만 엔을 넘으면 연금과
의료보험의 사회보험료 부담이 발생하는 ‘130만 엔의 벽’과 중복되어, 수입을 그 이하로 줄이기 위해 근무시간을 줄일 가능성이 높아질 것으로
판단하였다. 자민당의 세제조사회의 노다 최고 고문은 23일 BS후지방송에 방송된 프로그램에서 소득세가 공제되는 배우자의 연간수입 상한액을 150만
엔으로 인상하는 것이 바람직하다는 입장을 표명하였다. 공명당 간부도 같은 날 ‘150만 엔 안’으로 인상하는 것을 우선적으로 검토하고 있다고
밝혔다. 일본 재무성은 소득세가 공제되는 배우자의 연간수입 상한액을 150만 엔으로 인상할 경우, 360만 가구가 감세 혜택을 받게 되는 데에
비해, 연간수입 상한액을 130만 엔으로 인상할 경우 감세 혜택을 받는 가구는 260만 가구에 머물 것으로 추계하였다.
- 지방자치단체의 행정에 인권개념을 도입하기 위해서는 우선 지속가능한 제도를 구축하는 것이 매우 중요하다. 제도에는 조례, 인력 또는 조직 등이
포함된다. 지방자치단체 인권제도의 도입은 2002년 울산광역시에서 ‘인권교육 및 인권보호활동 추진에 관한 조례’ 제정운동을 시작으로 지방자치단체
인권조례 운동이 모색되기 시작하였으며 2007년에는 경남 진주에서도 학계 연구자들과 시민단체 활동가들이 인권조례 제정활동을 벌이기 시작했다.
두 번의 실패 끝에 결국 2009년 5월 광주광역시에서 전국 최초로 ‘광주광역시 민주・인권・평화도시 육성 조례’를 제정하면서 인권조례 제정활동이
본격화된다. 2012년 국가인권위원회는 지역사회에서의 인권 보장 및 증진을 위하여 각 지자체의 장에게 인권 기본조례의 제・개정을 권고하며 인권제도의
도입을 급격히 확산시키는 견인차 역할을 담당한다. 2019년 현재 총 104곳의 지방자치단체(광역자치단체 17곳, 기초자치단체 87곳)에서
제정되었다.
- source_sentence: 경영방침을 자긍심을 심는 콘텐츠의 제작으로 정하여 실행해 나가는 방송사는 어디니?
sentences:
- 여기서 ‘사생활의 비밀’이란 사생활과 관련된 사사로운 자신만의 영역이 사회공동체의 일반적인 생활규범의 범위 내에서 본인의 의사에 반해서 타인에게
알려지지 않도록 할 수 있는 권리를 말한다. 구체적으로는 (i) 본인의 의사에 반하여 감시, 도청, 비밀녹음, 비밀촬영 등에 의하여 사생활의
비밀을 탐지하거나 사생활의 평온을 침입하여서는 아니 된다는 것, (ii) 사적 사항의 공개는 개인의 자율에 일임되어야 하며, 난처한 사사(私事)를
무단으로 공개하여서는 아니 된다는 것, (iii) 허위의 사실을 공표하거나 사실을 과장 왜곡되게 공표하여 특정인을 진실과 다르게 인식하도록
하여서는 아니 된다는 것, (iv) 성명, 초상, 경력 등이 사실과 일치하더라도 영리의 목적으로 사용하여서는 아니 된다는 것 등을 그 내용으로
한다. 또 ‘사생활의 자유’란 사생활을 자유롭게 형성해 나가고, 그 설계 및 내용에 대해서 외부로부터의 간섭을 받지 않을 권리를 말한다. 이에는
대체로 결혼, 피임, 낙태, 자녀의 양육, 교육, 성생활, 두발, 의복형태, 취미생활 등의 자유가 포함된다.
- 제가 이 자리에서 여러 번 강조합니다만 방송의 품질을 높이고 품격 있는 방송을 하도록 우리의 정책의지가 담겨 있어야 한다고 봅니다. 그래서
가뜩이나 광고시장이 위축되고 있기 때문에 모든 방송사들이 시청률에 매달릴 수밖에 없는 실정입니다. 그러면 시청률은 그저 이목을 끌고 검증되지
않는 자극적인 언사를 쓰는 방송프로그램에 더 시청률이 몰릴 수밖에 없습니다. 그런 유혹을 방송들이 철저하게 절제를 하면서 방송의 품격을 지켜
나갈 수 있도록 우리가 그렇게 유도해야 하는 것입니다. 특히 출연진을 잘 검증하는 장치가 과연 방송사에서 자율적으로 잘 마련되어 있고, 또
그것이 잘 이루어지고 있는지를 철저하게 점검하는 부분들을 반드시 방송사들이, 사업자들이 깨닫고 자정하는 노력이 있어야 할 것으로 봅니다. 그래서
그런 부분에 대한 우리의 정책의지가 발휘될 수 있도록 다시 한 번 주문합니다. 이상입니다.
- 하지만 공정성 과 객관성 확보와 오보·막말 방지에 대한 우리 채널A의 의지는 그 어느 때보다 확고합니다. 지난해부터 그런 것들에 대해서 저뿐만
아니라 많은 조직원들이 좀 더 강하게 문제제기를 하고 있고 고쳐 나가고 노력하고 있고, 그래서 제도적 완비에도 최선을 다하려고 노력하고 있습니다.
채널A는 매년 3가지 경영방침을 정해서 이를 우선적으로 실천해 나가고 있습니다. 지난해 3대 경영방침 중 첫 번째가 퀄리티 저널리즘의 구현이었습니다.
그리고 또 올해에는 역시 첫 번째가 채널A의 자긍심을 심는 콘텐츠를 만들자는 의미로 A 프라이드 콘텐츠의 확산을 우리 3대 경영방침으로 삼고
있습니다. 또 새롭게 설정한 채널A의 4대 비전 가운데에서 제일 첫 번째가 품격을 담는 채널A이고 두 번째가 공정하고 건전한 여론을 담는 채널A입니다.
이 모든 것들이 우리 채널A의 콘텐츠의 공정성과 객관성을 최대한 담고 오보와 막말을 모두 덜어내자는 의지의 표현이고 또 반드시 실천해 나가야
되는 채널A의 숙제이자 목표입니다. 제도적으로도 보완과 개선을 계속 해 나가고 있습니다.
- source_sentence: 1999년에 구축한 국방조달관리정보체계를 토대로 하여 중앙조달 전자입찰체계를 개발운용하고 있는 기관은 어디야?
sentences:
- 국방부조달본부는 1995년‘전자거래 시범기관’으로 지정된 이후, 1999년 국방조달관리정보체계(DPAMIS)를 구축하고 이를 기반으로 중앙조달
전자입찰체계를 개발운용하고 있으며, 부대조달을 포함한 전군 단일 전자입찰체계를 개발중에 있다. 국방조달행정의 편의성, 투명성 및 대민 서비스
개선요구가 증대되는 등 전자상거래의 필요성이 제기됨에 따라 2000년 11월 중앙조달 전자입찰체계를 구축완료하고, 2001년 4월부터 소량·소액
품목을 대상으로 부분적으로 전자입찰을 실시하였으며, 2002년부터는 비밀사업과 다자간 협상사업 및 법적으로 전자상거래가 제한되는 외자분야를
제외한 전 품목을 대상으로 전자입찰을 시행하고 있다. 또한, 2002년부터는 2003년도 국방조달분야 전자입찰의 전면시행을 목표로 중앙조달
전자입찰체계 확대·보완사업을 추진하고 있는 바, 이에는 부대조달 전자입찰체계 개발을 비롯하여 조달원 통합관리, 원가자료 획득 및 산정기능,
제증명 신청 및 발급 등 민원 서비스체계가 포함되어 있다.
- 조달청은 정부ㆍ공공기관에서 필요한 물자와 용역 등을 제때 적정한 가격으로 구매ㆍ공급할 수 있게 하는 국가종합전자조달시스템(나라장터, www.g2b.go.kr)을
구축ㆍ운영하고 있다. 이 서비스로 수요기관ㆍ조달업체 등록, 입찰, 계약, 검사, 대금 지급 등 정부ㆍ공공조달 전 과정을 인터넷으로 처리하고
확인할 수 있다. 국가종합전자조달 서비스로 입찰, 계약, 지급 등 조달 업무 전 과정에 대한 온라인 처리, 진행 상황의 실시간 모니터링이 가능해졌으며,
2003년 서비스 개시 이후 전자입찰을 통한 거래 실적이 매년 증가하는 추세다. 2017년에는 국가종합조달서비스의 안정적인 운영과 전문성을
확보하기 위한 전자조달센터를 지정해RFID 등 8개 시스템의 운영ㆍ유지보수 사업에 대한 전자조달지원센터 지정과 이관을 추진했다. 조달통계에
관한 빅데이터 분석 시스템을 구축해 공공조달업무 효율화를 지원하고, 향상된 보안성으로 빠른 실행을 지원하는 안전입찰 2.0을 도입함으로써 이용자
만족도 및 보안성을 높이고 있다.
- 북한 핵전략에 대한 연구는 어떤 효과를 갖는가. 우선 북한의 핵전략을 파악함으로써 북한의 핵위협에 대해 보다 효과적인 군사적 대응이 가능하게
된다. 현재 우리는 북한의 핵전략에 대해 지극히 초보적인 지식만을 갖고 있으며, 따라서 이에 대한 대응책도 유효하거나 충분치 않을 가능성이
높다. 북한의 핵전략을 파악한다는 것은 북한이 핵무기의 수량을 얼마나 증대할 것인지, 핵무기의 종류와 핵무기를 어떤 상황에서 사용할 것인지,
핵무기를 어떤 용도로 사용할 것인지를 이해하는 것이다. 이렇게 북한의 핵전략을 이해할 때, 북한의 핵사용 또는 핵사용 위협을 성공적으로 억제할
가능성도 높아질 것이다. 또한 북한의 핵전략에 대한 이해는 우리의 대북정책 또는 북핵정책에 큰 영향을 미칠 것이다. 사실 현재 북핵에 대한
국내의 논의는 대부분 북핵을 어떻게 정치‧외교적으로 제거할 것인지에 대한 비핵화문제에 집중된다. 학계에서 북한의 핵무기 사용과 사용위협에 대한
군사안보적 대응에 대한 연구와 논의는 거의 전무하거나, 매우 초보적인 단계에 머물고 있다고 해도 과언이 아니다.
- source_sentence: 1960년부터 1970년대 사회주의권은 물론 비사회주의권의 개발도상국을 지원하며 제3세계 리더 역할을 한 국가는
어디니?
sentences:
- 1974년 포르투갈에서부터 시작한 민주화의 제3의 물결은 남유럽과 중남미를 거쳐 아시아, 동유럽, 아프리카 등으로 20여 년 동안 확산되었다.
1980년대 말 냉전의 해체는 이러한 민주화의 물결이 붕괴한 사회주의 국가들에게도 영향을 미쳐 자본주의를 기반으로 한 민주주의와 경쟁할 정치체제는
역사상 더 이상 존재하지 않을 것임을 선포하게 했다. 하지만 새로운 세기에 접어들어 모두를 의아하게 만든 현실은 여전히 지금도 전 세계 절반
이상의 국가들이 민주주의가 아닌 권위주의 체제를 유지하고 있는 것이었다. 권위주의 체제의 붕괴는 당연히 민주주의 체제의 수립으로 이어질 것이라는
낙관적 사고에 커다란 의구심을 던지게 만든 현실이자, 기존 권위주의 체제가 붕괴하고 새로이 등장하는 체제가 또 다른 유형의 권위주의일 수 있음을
깨닫게 해준 현실이었다. 대표적으로 사회주의권 붕괴 이후 동유럽에 등장한 정치체제의 다수는 구 공산당 간부들에 의해 지배되는 새로운 유형의
권위주의 체제이거나 벨라루스, 우즈베키스탄, 아제르바이잔처럼 사회주의 국가 시절보다 더 폭압적인 독재체제였다.
- 정부는 성장동력 확충과 사회문제 해결에 필요한 국가 전략기술 분야를 집중적으로 지원하기 위해 「국가전략프로젝트」 사업을 신규로 추진할 계획이다.
동 사업은 「성장동력 분야」와 「삶의 질 및 국민행복 분야」의 9개 프로젝트로 구성된다. 성장동력 분야는 자율주행차 ․ 스마트시티 ․ 가상증강현실
․ 경량소재 ․ 인공지능 등 5개 프로젝트가, 삶의 질 및 국민행복 분야는 미세먼지 ․ 탄소자원화 ․ 정밀의료 ․ 바이오 신약 등 4개 프로젝트가
포함된다. 미래창조과학부는 국가전략프로젝트 사업의 총사업비를 약 1조 6,000억원으로 예상하고 있다. 2017년 예산안은 300억원이며,
프로젝트별 예산은 7개 부처의 예산안에 편성되어 있다. 9개 프로젝트 중 예비타당성조사가 진행 중인 5개 프로젝트의 예산은 세부시행계획 수립비용으로
편성하였다.
- '1960~70년대 중국은 제3세계의 리더로서 특히 아프리카 신생독립국을 포함한 사회주의권은 물론 비사회주의권 개발도상국을 지원했다. 1960년
최초로 기니에 무이자 차관을 제공했으며 1960년대 후반 탄자니아와 잠비아를 연결하는 철로를 건설하는 등 제3세계 원조를 위한 물자와 인력을
제공했다, 쿠웨이트, 사우디아라비아, 아랍에미리트 등의 중동 이슬람 국가들은 1970년대 이후부터 중동 국가 결속을 위한 지역 차원의 지원을
시작했다. 쿠웨이트, 사우디아라비아, 아랍에미리트 등의 중동 이슬람 국가들은 1970년대 이후부터 중동 국가 결속을 위한 지역 차원의 지원을
시작했다. 1961년 쿠웨이트는 아랍경제개발펀드(The Kuwait Fund for Arab Economic Development)를 설립했으며,
1970년 중반 이후 이슬람개발은행(IsDB: Islamic Development Bank)과 아랍경제개발은행(BADEA: Arab Bank
for Economic Development in Africa) 등을 운영했다.'
- source_sentence: 실제적 발달 수준과 잠재적 발단 수준 사이를 역동적인 공간으로 이야기하는 영역은 뭐야?
sentences:
- 세 번째는 비공식적 및 공식적 지원 관점으로 아동기를 역동적인 관계의 복합체로 인식하며, 역동적인 상호관계는 만족스럽고 성공적인 아동기에 필수요소이다.
이러한 상호관계의 범위는 아동 양육과 보호의 주 제공자인 부모에서부터 아동 권리를 최종적으로 보장하는 역할을 하는 국가에까지 이른다. 아동에게
필수적인 지원과 서비스는 가족의 사회 관계망 및 가족과 지역사회를 통한 비공식적 지원이나 제 3섹터 및 영리 부문 및 국가와 기관들을 통한
공식적 지원으로 전달된다. 비공식적 및 공식적 지원은 아동이 필요로 하고 혜택을 받을 수 있는 지원과 서비스를 가능하게 하는 전달자 역할을
한다. 이러한 ‘사회적 자본’을 지원하는 것이 국가 아동 전략의 핵심 주제이다. 이렇게 다양하고 서로 상호작용하는 지원의 원천으로부터 아동은
앞서 말한 9개의 발달 영역에서 성장하기 위한 도움을 받는다. 모든 아동은 좋은 교육과 양질의 의료 서비스에 대한 접근권 등 기본적인 지원과
서비스를 필요로 한다. 일부 아동은 빈곤이나 장애, 소수 인종 및 문화 집단, 양육과 보호의 필요성, 비행 및 자해 행동 등을 이유로 추가적인
지원과 서비스를 필요로 한다.
- '하브루타에 임하는 학생들의 태도는 다양하다. 기본적인 학습에 대한 참여율에 따라 상당한 차이를 보인다. 앞에서 언급한 인재시교에 다다를 때까지
기다려주고 관심가져주며, 칭찬과 극려의 말로 지지 할 수 있어야 한다. 비고츠키(Vygotsky)는 근접 발달영역(the zone of proximal
development: ZPD)을“독자적으로 문제를 해결함으로써 결정되는 실제적 발달 수준과 성인의 안내나 보다 능력 있는 또래들과 협동하여
문제를 해결함으로써 결정되는 잠재적 발달 수준 간의 거리”로 규정한다. 근접발달 영역(the zone of proximal development)은
실제적 발달 수준(actualdevelopmental level)과 잠재적 발달수준(potential developmental level)사이를
역동적인 공간으로 이야기 한다. 즉 하브루타는 소속한 학습자(친구) 상호작용을 통하여 잠재적 발달수준(potential developmental
level)까지 도달하는 것이다. 이러한 작용에 꼭 필요한 것 중 하나는 학습자가 수업에 임하는 태도이다. 즉 학습자의 동기부여를 어떻게 불러일으킬
수 있느냐가 관권이다.'
- KTR이 영국의 CE인증기관인 HPi Verification Service Ltd(이하 HPiVS) 와 협력을 강화하기로 했다. 최형기 KTR
원장과 Mr. Alasdair Lewis Reay HPiVS 원장은 유럽으로 수출하는 압력플랜트 설비, 용접, 산업용 기계류에 대한 CE인증업무
협력을 위해 11월 25일 과천청사 5층 아리랑홀에서 협약을 체결했다. KTR은 국내 압력장비 및 기계류 인증 관련 업계의 인증 수요가 증가함에
따라, 현지 기관과의 업무협력을 강화해 인증사업 체계를 확립하기 위해 협약을 체결했다. 협약 체결 후 HPiVS는 KTR 과천청사 내 주요
시험실을 견학하며 연구원 현황을 파악하고 KTR과의 사업 협력 방안에 대해 논의하는 시간을 가졌다. HPiVS는 유럽위원회로부터 인정받은 영국의
유럽 인증기관으로서 플랜트 압력설비, 산업용 기계류, 레저용 장비, 단순압력장비 4개 제품군의 CE인증 권한을 지니고 있다.
---
# SentenceTransformer based on BAAI/bge-m3
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 1024 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("seongil-dn/bge-m3-kor-retrieval-451949-bs64-book-50")
# Run inference
sentences = [
'실제적 발달 수준과 잠재적 발단 수준 사이를 역동적인 공간으로 이야기하는 영역은 뭐야?',
'하브루타에 임하는 학생들의 태도는 다양하다. 기본적인 학습에 대한 참여율에 따라 상당한 차이를 보인다. 앞에서 언급한 인재시교에 다다를 때까지 기다려주고 관심가져주며, 칭찬과 극려의 말로 지지 할 수 있어야 한다. 비고츠키(Vygotsky)는 근접 발달영역(the zone of proximal development: ZPD)을“독자적으로 문제를 해결함으로써 결정되는 실제적 발달 수준과 성인의 안내나 보다 능력 있는 또래들과 협동하여 문제를 해결함으로써 결정되는 잠재적 발달 수준 간의 거리”로 규정한다. 근접발달 영역(the zone of proximal development)은 실제적 발달 수준(actualdevelopmental level)과 잠재적 발달수준(potential developmental level)사이를 역동적인 공간으로 이야기 한다. 즉 하브루타는 소속한 학습자(친구) 상호작용을 통하여 잠재적 발달수준(potential developmental level)까지 도달하는 것이다. 이러한 작용에 꼭 필요한 것 중 하나는 학습자가 수업에 임하는 태도이다. 즉 학습자의 동기부여를 어떻게 불러일으킬 수 있느냐가 관권이다.',
'세 번째는 비공식적 및 공식적 지원 관점으로 아동기를 역동적인 관계의 복합체로 인식하며, 역동적인 상호관계는 만족스럽고 성공적인 아동기에 필수요소이다. 이러한 상호관계의 범위는 아동 양육과 보호의 주 제공자인 부모에서부터 아동 권리를 최종적으로 보장하는 역할을 하는 국가에까지 이른다. 아동에게 필수적인 지원과 서비스는 가족의 사회 관계망 및 가족과 지역사회를 통한 비공식적 지원이나 제 3섹터 및 영리 부문 및 국가와 기관들을 통한 공식적 지원으로 전달된다. 비공식적 및 공식적 지원은 아동이 필요로 하고 혜택을 받을 수 있는 지원과 서비스를 가능하게 하는 전달자 역할을 한다. 이러한 ‘사회적 자본’을 지원하는 것이 국가 아동 전략의 핵심 주제이다. 이렇게 다양하고 서로 상호작용하는 지원의 원천으로부터 아동은 앞서 말한 9개의 발달 영역에서 성장하기 위한 도움을 받는다. 모든 아동은 좋은 교육과 양질의 의료 서비스에 대한 접근권 등 기본적인 지원과 서비스를 필요로 한다. 일부 아동은 빈곤이나 장애, 소수 인종 및 문화 집단, 양육과 보호의 필요성, 비행 및 자해 행동 등을 이유로 추가적인 지원과 서비스를 필요로 한다.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 64
- `learning_rate`: 3e-05
- `num_train_epochs`: 1
- `max_steps`: 50
- `warmup_ratio`: 0.05
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: 50
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0019 | 1 | 0.9318 |
| 0.0037 | 2 | 0.9071 |
| 0.0056 | 3 | 0.9399 |
| 0.0075 | 4 | 0.8293 |
| 0.0094 | 5 | 0.7001 |
| 0.0112 | 6 | 0.6959 |
| 0.0131 | 7 | 0.5847 |
| 0.0150 | 8 | 0.4753 |
| 0.0169 | 9 | 0.5343 |
| 0.0187 | 10 | 0.4751 |
| 0.0206 | 11 | 0.4502 |
| 0.0225 | 12 | 0.4661 |
| 0.0243 | 13 | 0.4421 |
| 0.0262 | 14 | 0.4721 |
| 0.0281 | 15 | 0.4191 |
| 0.0300 | 16 | 0.4317 |
| 0.0318 | 17 | 0.4206 |
| 0.0337 | 18 | 0.3953 |
| 0.0356 | 19 | 0.3775 |
| 0.0375 | 20 | 0.307 |
| 0.0393 | 21 | 0.3553 |
| 0.0412 | 22 | 0.3592 |
| 0.0431 | 23 | 0.341 |
| 0.0449 | 24 | 0.4565 |
| 0.0468 | 25 | 0.3349 |
| 0.0487 | 26 | 0.3669 |
| 0.0506 | 27 | 0.35 |
| 0.0524 | 28 | 0.348 |
| 0.0543 | 29 | 0.3434 |
| 0.0562 | 30 | 0.3778 |
| 0.0581 | 31 | 0.3134 |
| 0.0599 | 32 | 0.3695 |
| 0.0618 | 33 | 0.3719 |
| 0.0637 | 34 | 0.3299 |
| 0.0655 | 35 | 0.3336 |
| 0.0674 | 36 | 0.3491 |
| 0.0693 | 37 | 0.3609 |
| 0.0712 | 38 | 0.2784 |
| 0.0730 | 39 | 0.3002 |
| 0.0749 | 40 | 0.3753 |
| 0.0768 | 41 | 0.26 |
| 0.0787 | 42 | 0.2543 |
| 0.0805 | 43 | 0.274 |
| 0.0824 | 44 | 0.2681 |
| 0.0843 | 45 | 0.2977 |
| 0.0861 | 46 | 0.281 |
| 0.0880 | 47 | 0.2937 |
| 0.0899 | 48 | 0.2997 |
| 0.0918 | 49 | 0.3303 |
| 0.0936 | 50 | 0.2493 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.3.1+cu121
- Accelerate: 1.1.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on BAAI/bge-m3
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 1024 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("seongil-dn/bge-m3-kor-retrieval-451949-bs64-book-50")
# Run inference
sentences = [
'실제적 발달 수준과 잠재적 발단 수준 사이를 역동적인 공간으로 이야기하는 영역은 뭐야?',
'하브루타에 임하는 학생들의 태도는 다양하다. 기본적인 학습에 대한 참여율에 따라 상당한 차이를 보인다. 앞에서 언급한 인재시교에 다다를 때까지 기다려주고 관심가져주며, 칭찬과 극려의 말로 지지 할 수 있어야 한다. 비고츠키(Vygotsky)는 근접 발달영역(the zone of proximal development: ZPD)을“독자적으로 문제를 해결함으로써 결정되는 실제적 발달 수준과 성인의 안내나 보다 능력 있는 또래들과 협동하여 문제를 해결함으로써 결정되는 잠재적 발달 수준 간의 거리”로 규정한다. 근접발달 영역(the zone of proximal development)은 실제적 발달 수준(actualdevelopmental level)과 잠재적 발달수준(potential developmental level)사이를 역동적인 공간으로 이야기 한다. 즉 하브루타는 소속한 학습자(친구) 상호작용을 통하여 잠재적 발달수준(potential developmental level)까지 도달하는 것이다. 이러한 작용에 꼭 필요한 것 중 하나는 학습자가 수업에 임하는 태도이다. 즉 학습자의 동기부여를 어떻게 불러일으킬 수 있느냐가 관권이다.',
'세 번째는 비공식적 및 공식적 지원 관점으로 아동기를 역동적인 관계의 복합체로 인식하며, 역동적인 상호관계는 만족스럽고 성공적인 아동기에 필수요소이다. 이러한 상호관계의 범위는 아동 양육과 보호의 주 제공자인 부모에서부터 아동 권리를 최종적으로 보장하는 역할을 하는 국가에까지 이른다. 아동에게 필수적인 지원과 서비스는 가족의 사회 관계망 및 가족과 지역사회를 통한 비공식적 지원이나 제 3섹터 및 영리 부문 및 국가와 기관들을 통한 공식적 지원으로 전달된다. 비공식적 및 공식적 지원은 아동이 필요로 하고 혜택을 받을 수 있는 지원과 서비스를 가능하게 하는 전달자 역할을 한다. 이러한 ‘사회적 자본’을 지원하는 것이 국가 아동 전략의 핵심 주제이다. 이렇게 다양하고 서로 상호작용하는 지원의 원천으로부터 아동은 앞서 말한 9개의 발달 영역에서 성장하기 위한 도움을 받는다. 모든 아동은 좋은 교육과 양질의 의료 서비스에 대한 접근권 등 기본적인 지원과 서비스를 필요로 한다. 일부 아동은 빈곤이나 장애, 소수 인종 및 문화 집단, 양육과 보호의 필요성, 비행 및 자해 행동 등을 이유로 추가적인 지원과 서비스를 필요로 한다.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 64
- `learning_rate`: 3e-05
- `num_train_epochs`: 1
- `max_steps`: 50
- `warmup_ratio`: 0.05
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: 50
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0019 | 1 | 0.9318 |
| 0.0037 | 2 | 0.9071 |
| 0.0056 | 3 | 0.9399 |
| 0.0075 | 4 | 0.8293 |
| 0.0094 | 5 | 0.7001 |
| 0.0112 | 6 | 0.6959 |
| 0.0131 | 7 | 0.5847 |
| 0.0150 | 8 | 0.4753 |
| 0.0169 | 9 | 0.5343 |
| 0.0187 | 10 | 0.4751 |
| 0.0206 | 11 | 0.4502 |
| 0.0225 | 12 | 0.4661 |
| 0.0243 | 13 | 0.4421 |
| 0.0262 | 14 | 0.4721 |
| 0.0281 | 15 | 0.4191 |
| 0.0300 | 16 | 0.4317 |
| 0.0318 | 17 | 0.4206 |
| 0.0337 | 18 | 0.3953 |
| 0.0356 | 19 | 0.3775 |
| 0.0375 | 20 | 0.307 |
| 0.0393 | 21 | 0.3553 |
| 0.0412 | 22 | 0.3592 |
| 0.0431 | 23 | 0.341 |
| 0.0449 | 24 | 0.4565 |
| 0.0468 | 25 | 0.3349 |
| 0.0487 | 26 | 0.3669 |
| 0.0506 | 27 | 0.35 |
| 0.0524 | 28 | 0.348 |
| 0.0543 | 29 | 0.3434 |
| 0.0562 | 30 | 0.3778 |
| 0.0581 | 31 | 0.3134 |
| 0.0599 | 32 | 0.3695 |
| 0.0618 | 33 | 0.3719 |
| 0.0637 | 34 | 0.3299 |
| 0.0655 | 35 | 0.3336 |
| 0.0674 | 36 | 0.3491 |
| 0.0693 | 37 | 0.3609 |
| 0.0712 | 38 | 0.2784 |
| 0.0730 | 39 | 0.3002 |
| 0.0749 | 40 | 0.3753 |
| 0.0768 | 41 | 0.26 |
| 0.0787 | 42 | 0.2543 |
| 0.0805 | 43 | 0.274 |
| 0.0824 | 44 | 0.2681 |
| 0.0843 | 45 | 0.2977 |
| 0.0861 | 46 | 0.281 |
| 0.0880 | 47 | 0.2937 |
| 0.0899 | 48 | 0.2997 |
| 0.0918 | 49 | 0.3303 |
| 0.0936 | 50 | 0.2493 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.3.1+cu121
- Accelerate: 1.1.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "BAAI/bge-m3", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:451949", "loss:CachedMultipleNegativesRankingLoss"], "widget": [{"source_sentence": "일본 재무성은 배우자의 연간 수입 상한액에 대해 얼마와 130만 엔 안을 제시했어?", "sentences": ["일본 정부는 저출산 대책을 강화할 재원 확보를 위해 기업의 육아지원 출연금을 증액하도록 경제계에 요구할 방침이다. 만약 이 방침이 실현되면 기업의 부담금은 연 최대 1,000억 엔 규모로 확대되고, 확대된 재원은 맞벌이 가구나 다자녀 가구의 육아지원에 사용될 계획이다. 이번 조치는 아베 신조 총리가 주도하는 ‘1억 총 활약 사회’ 실현을 위한 핵심 정책으로 활용될 계획이지만 경제계의 반발도 고려하지 않을 수 없는 상황이다. 경단련과 경제동우회에는 이미 정부 방침이 전달되었는데, 아베 총리는 2015년 9월 말에 발표한 아베노믹스의 2단계 방편인 ‘새로운 세 개의 화살’에서 현재 출산율인 1.4를 2020년대 중반까지 1.8로 상향시킨다는 목표를 밝힌 바 있다. 일본 정부가 기업에 요구하는 것은 연금특별회계의 아동 및 육아지원 계정에 대한 출연금 증액인데, 정부 안에 따르면 현재 월급과 상여금의 0.15%인 기업출연금은 2016년부터는 0.20%로 인상될 전망이다.", "일본 재무성은 지금까지 배우자의 연간수입 상한액에 대해서 ‘150만 엔 안’과 ‘130만 엔 안’의 두 가지 안을 제시하였는데, 자민당의 세제조사회에서는 ‘150만 엔 안’이 효과가 높을 것이라는 의견이 대다수를 차지했다. ‘130만 엔 안’의 경우 배우자의 연간수입이 130만 엔을 넘으면 연금과 의료보험의 사회보험료 부담이 발생하는 ‘130만 엔의 벽’과 중복되어, 수입을 그 이하로 줄이기 위해 근무시간을 줄일 가능성이 높아질 것으로 판단하였다. 자민당의 세제조사회의 노다 최고 고문은 23일 BS후지방송에 방송된 프로그램에서 소득세가 공제되는 배우자의 연간수입 상한액을 150만 엔으로 인상하는 것이 바람직하다는 입장을 표명하였다. 공명당 간부도 같은 날 ‘150만 엔 안’으로 인상하는 것을 우선적으로 검토하고 있다고 밝혔다. 일본 재무성은 소득세가 공제되는 배우자의 연간수입 상한액을 150만 엔으로 인상할 경우, 360만 가구가 감세 혜택을 받게 되는 데에 비해, 연간수입 상한액을 130만 엔으로 인상할 경우 감세 혜택을 받는 가구는 260만 가구에 머물 것으로 추계하였다.", "지방자치단체의 행정에 인권개념을 도입하기 위해서는 우선 지속가능한 제도를 구축하는 것이 매우 중요하다. 제도에는 조례, 인력 또는 조직 등이 포함된다. 지방자치단체 인권제도의 도입은 2002년 울산광역시에서 ‘인권교육 및 인권보호활동 추진에 관한 조례’ 제정운동을 시작으로 지방자치단체 인권조례 운동이 모색되기 시작하였으며 2007년에는 경남 진주에서도 학계 연구자들과 시민단체 활동가들이 인권조례 제정활동을 벌이기 시작했다. 두 번의 실패 끝에 결국 2009년 5월 광주광역시에서 전국 최초로 ‘광주광역시 민주・인권・평화도시 육성 조례’를 제정하면서 인권조례 제정활동이 본격화된다. 2012년 국가인권위원회는 지역사회에서의 인권 보장 및 증진을 위하여 각 지자체의 장에게 인권 기본조례의 제・개정을 권고하며 인권제도의 도입을 급격히 확산시키는 견인차 역할을 담당한다. 2019년 현재 총 104곳의 지방자치단체(광역자치단체 17곳, 기초자치단체 87곳)에서 제정되었다."]}, {"source_sentence": "경영방침을 자긍심을 심는 콘텐츠의 제작으로 정하여 실행해 나가는 방송사는 어디니?", "sentences": ["여기서 ‘사생활의 비밀’이란 사생활과 관련된 사사로운 자신만의 영역이 사회공동체의 일반적인 생활규범의 범위 내에서 본인의 의사에 반해서 타인에게 알려지지 않도록 할 수 있는 권리를 말한다. 구체적으로는 (i) 본인의 의사에 반하여 감시, 도청, 비밀녹음, 비밀촬영 등에 의하여 사생활의 비밀을 탐지하거나 사생활의 평온을 침입하여서는 아니 된다는 것, (ii) 사적 사항의 공개는 개인의 자율에 일임되어야 하며, 난처한 사사(私事)를 무단으로 공개하여서는 아니 된다는 것, (iii) 허위의 사실을 공표하거나 사실을 과장 왜곡되게 공표하여 특정인을 진실과 다르게 인식하도록 하여서는 아니 된다는 것, (iv) 성명, 초상, 경력 등이 사실과 일치하더라도 영리의 목적으로 사용하여서는 아니 된다는 것 등을 그 내용으로 한다. 또 ‘사생활의 자유’란 사생활을 자유롭게 형성해 나가고, 그 설계 및 내용에 대해서 외부로부터의 간섭을 받지 않을 권리를 말한다. 이에는 대체로 결혼, 피임, 낙태, 자녀의 양육, 교육, 성생활, 두발, 의복형태, 취미생활 등의 자유가 포함된다.", "제가 이 자리에서 여러 번 강조합니다만 방송의 품질을 높이고 품격 있는 방송을 하도록 우리의 정책의지가 담겨 있어야 한다고 봅니다. 그래서 가뜩이나 광고시장이 위축되고 있기 때문에 모든 방송사들이 시청률에 매달릴 수밖에 없는 실정입니다. 그러면 시청률은 그저 이목을 끌고 검증되지 않는 자극적인 언사를 쓰는 방송프로그램에 더 시청률이 몰릴 수밖에 없습니다. 그런 유혹을 방송들이 철저하게 절제를 하면서 방송의 품격을 지켜 나갈 수 있도록 우리가 그렇게 유도해야 하는 것입니다. 특히 출연진을 잘 검증하는 장치가 과연 방송사에서 자율적으로 잘 마련되어 있고, 또 그것이 잘 이루어지고 있는지를 철저하게 점검하는 부분들을 반드시 방송사들이, 사업자들이 깨닫고 자정하는 노력이 있어야 할 것으로 봅니다. 그래서 그런 부분에 대한 우리의 정책의지가 발휘될 수 있도록 다시 한 번 주문합니다. 이상입니다.", "하지만 공정성 과 객관성 확보와 오보·막말 방지에 대한 우리 채널A의 의지는 그 어느 때보다 확고합니다. 지난해부터 그런 것들에 대해서 저뿐만 아니라 많은 조직원들이 좀 더 강하게 문제제기를 하고 있고 고쳐 나가고 노력하고 있고, 그래서 제도적 완비에도 최선을 다하려고 노력하고 있습니다. 채널A는 매년 3가지 경영방침을 정해서 이를 우선적으로 실천해 나가고 있습니다. 지난해 3대 경영방침 중 첫 번째가 퀄리티 저널리즘의 구현이었습니다. 그리고 또 올해에는 역시 첫 번째가 채널A의 자긍심을 심는 콘텐츠를 만들자는 의미로 A 프라이드 콘텐츠의 확산을 우리 3대 경영방침으로 삼고 있습니다. 또 새롭게 설정한 채널A의 4대 비전 가운데에서 제일 첫 번째가 품격을 담는 채널A이고 두 번째가 공정하고 건전한 여론을 담는 채널A입니다. 이 모든 것들이 우리 채널A의 콘텐츠의 공정성과 객관성을 최대한 담고 오보와 막말을 모두 덜어내자는 의지의 표현이고 또 반드시 실천해 나가야 되는 채널A의 숙제이자 목표입니다. 제도적으로도 보완과 개선을 계속 해 나가고 있습니다."]}, {"source_sentence": "1999년에 구축한 국방조달관리정보체계를 토대로 하여 중앙조달 전자입찰체계를 개발운용하고 있는 기관은 어디야?", "sentences": ["국방부조달본부는 1995년‘전자거래 시범기관’으로 지정된 이후, 1999년 국방조달관리정보체계(DPAMIS)를 구축하고 이를 기반으로 중앙조달 전자입찰체계를 개발운용하고 있으며, 부대조달을 포함한 전군 단일 전자입찰체계를 개발중에 있다. 국방조달행정의 편의성, 투명성 및 대민 서비스 개선요구가 증대되는 등 전자상거래의 필요성이 제기됨에 따라 2000년 11월 중앙조달 전자입찰체계를 구축완료하고, 2001년 4월부터 소량·소액 품목을 대상으로 부분적으로 전자입찰을 실시하였으며, 2002년부터는 비밀사업과 다자간 협상사업 및 법적으로 전자상거래가 제한되는 외자분야를 제외한 전 품목을 대상으로 전자입찰을 시행하고 있다. 또한, 2002년부터는 2003년도 국방조달분야 전자입찰의 전면시행을 목표로 중앙조달 전자입찰체계 확대·보완사업을 추진하고 있는 바, 이에는 부대조달 전자입찰체계 개발을 비롯하여 조달원 통합관리, 원가자료 획득 및 산정기능, 제증명 신청 및 발급 등 민원 서비스체계가 포함되어 있다.", "조달청은 정부ㆍ공공기관에서 필요한 물자와 용역 등을 제때 적정한 가격으로 구매ㆍ공급할 수 있게 하는 국가종합전자조달시스템(나라장터, www.g2b.go.kr)을 구축ㆍ운영하고 있다. 이 서비스로 수요기관ㆍ조달업체 등록, 입찰, 계약, 검사, 대금 지급 등 정부ㆍ공공조달 전 과정을 인터넷으로 처리하고 확인할 수 있다. 국가종합전자조달 서비스로 입찰, 계약, 지급 등 조달 업무 전 과정에 대한 온라인 처리, 진행 상황의 실시간 모니터링이 가능해졌으며, 2003년 서비스 개시 이후 전자입찰을 통한 거래 실적이 매년 증가하는 추세다. 2017년에는 국가종합조달서비스의 안정적인 운영과 전문성을 확보하기 위한 전자조달센터를 지정해RFID 등 8개 시스템의 운영ㆍ유지보수 사업에 대한 전자조달지원센터 지정과 이관을 추진했다. 조달통계에 관한 빅데이터 분석 시스템을 구축해 공공조달업무 효율화를 지원하고, 향상된 보안성으로 빠른 실행을 지원하는 안전입찰 2.0을 도입함으로써 이용자 만족도 및 보안성을 높이고 있다.", "북한 핵전략에 대한 연구는 어떤 효과를 갖는가. 우선 북한의 핵전략을 파악함으로써 북한의 핵위협에 대해 보다 효과적인 군사적 대응이 가능하게 된다. 현재 우리는 북한의 핵전략에 대해 지극히 초보적인 지식만을 갖고 있으며, 따라서 이에 대한 대응책도 유효하거나 충분치 않을 가능성이 높다. 북한의 핵전략을 파악한다는 것은 북한이 핵무기의 수량을 얼마나 증대할 것인지, 핵무기의 종류와 핵무기를 어떤 상황에서 사용할 것인지, 핵무기를 어떤 용도로 사용할 것인지를 이해하는 것이다. 이렇게 북한의 핵전략을 이해할 때, 북한의 핵사용 또는 핵사용 위협을 성공적으로 억제할 가능성도 높아질 것이다. 또한 북한의 핵전략에 대한 이해는 우리의 대북정책 또는 북핵정책에 큰 영향을 미칠 것이다. 사실 현재 북핵에 대한 국내의 논의는 대부분 북핵을 어떻게 정치‧외교적으로 제거할 것인지에 대한 비핵화문제에 집중된다. 학계에서 북한의 핵무기 사용과 사용위협에 대한 군사안보적 대응에 대한 연구와 논의는 거의 전무하거나, 매우 초보적인 단계에 머물고 있다고 해도 과언이 아니다."]}, {"source_sentence": "1960년부터 1970년대 사회주의권은 물론 비사회주의권의 개발도상국을 지원하며 제3세계 리더 역할을 한 국가는 어디니?", "sentences": ["1974년 포르투갈에서부터 시작한 민주화의 제3의 물결은 남유럽과 중남미를 거쳐 아시아, 동유럽, 아프리카 등으로 20여 년 동안 확산되었다. 1980년대 말 냉전의 해체는 이러한 민주화의 물결이 붕괴한 사회주의 국가들에게도 영향을 미쳐 자본주의를 기반으로 한 민주주의와 경쟁할 정치체제는 역사상 더 이상 존재하지 않을 것임을 선포하게 했다. 하지만 새로운 세기에 접어들어 모두를 의아하게 만든 현실은 여전히 지금도 전 세계 절반 이상의 국가들이 민주주의가 아닌 권위주의 체제를 유지하고 있는 것이었다. 권위주의 체제의 붕괴는 당연히 민주주의 체제의 수립으로 이어질 것이라는 낙관적 사고에 커다란 의구심을 던지게 만든 현실이자, 기존 권위주의 체제가 붕괴하고 새로이 등장하는 체제가 또 다른 유형의 권위주의일 수 있음을 깨닫게 해준 현실이었다. 대표적으로 사회주의권 붕괴 이후 동유럽에 등장한 정치체제의 다수는 구 공산당 간부들에 의해 지배되는 새로운 유형의 권위주의 체제이거나 벨라루스, 우즈베키스탄, 아제르바이잔처럼 사회주의 국가 시절보다 더 폭압적인 독재체제였다.", "정부는 성장동력 확충과 사회문제 해결에 필요한 국가 전략기술 분야를 집중적으로 지원하기 위해 「국가전략프로젝트」 사업을 신규로 추진할 계획이다. 동 사업은 「성장동력 분야」와 「삶의 질 및 국민행복 분야」의 9개 프로젝트로 구성된다. 성장동력 분야는 자율주행차 ․ 스마트시티 ․ 가상증강현실 ․ 경량소재 ․ 인공지능 등 5개 프로젝트가, 삶의 질 및 국민행복 분야는 미세먼지 ․ 탄소자원화 ․ 정밀의료 ․ 바이오 신약 등 4개 프로젝트가 포함된다. 미래창조과학부는 국가전략프로젝트 사업의 총사업비를 약 1조 6,000억원으로 예상하고 있다. 2017년 예산안은 300억원이며, 프로젝트별 예산은 7개 부처의 예산안에 편성되어 있다. 9개 프로젝트 중 예비타당성조사가 진행 중인 5개 프로젝트의 예산은 세부시행계획 수립비용으로 편성하였다.", "1960~70년대 중국은 제3세계의 리더로서 특히 아프리카 신생독립국을 포함한 사회주의권은 물론 비사회주의권 개발도상국을 지원했다. 1960년 최초로 기니에 무이자 차관을 제공했으며 1960년대 후반 탄자니아와 잠비아를 연결하는 철로를 건설하는 등 제3세계 원조를 위한 물자와 인력을 제공했다, 쿠웨이트, 사우디아라비아, 아랍에미리트 등의 중동 이슬람 국가들은 1970년대 이후부터 중동 국가 결속을 위한 지역 차원의 지원을 시작했다. 쿠웨이트, 사우디아라비아, 아랍에미리트 등의 중동 이슬람 국가들은 1970년대 이후부터 중동 국가 결속을 위한 지역 차원의 지원을 시작했다. 1961년 쿠웨이트는 아랍경제개발펀드(The Kuwait Fund for Arab Economic Development)를 설립했으며, 1970년 중반 이후 이슬람개발은행(IsDB: Islamic Development Bank)과 아랍경제개발은행(BADEA: Arab Bank for Economic Development in Africa) 등을 운영했다."]}, {"source_sentence": "실제적 발달 수준과 잠재적 발단 수준 사이를 역동적인 공간으로 이야기하는 영역은 뭐야?", "sentences": ["세 번째는 비공식적 및 공식적 지원 관점으로 아동기를 역동적인 관계의 복합체로 인식하며, 역동적인 상호관계는 만족스럽고 성공적인 아동기에 필수요소이다. 이러한 상호관계의 범위는 아동 양육과 보호의 주 제공자인 부모에서부터 아동 권리를 최종적으로 보장하는 역할을 하는 국가에까지 이른다. 아동에게 필수적인 지원과 서비스는 가족의 사회 관계망 및 가족과 지역사회를 통한 비공식적 지원이나 제 3섹터 및 영리 부문 및 국가와 기관들을 통한 공식적 지원으로 전달된다. 비공식적 및 공식적 지원은 아동이 필요로 하고 혜택을 받을 수 있는 지원과 서비스를 가능하게 하는 전달자 역할을 한다. 이러한 ‘사회적 자본’을 지원하는 것이 국가 아동 전략의 핵심 주제이다. 이렇게 다양하고 서로 상호작용하는 지원의 원천으로부터 아동은 앞서 말한 9개의 발달 영역에서 성장하기 위한 도움을 받는다. 모든 아동은 좋은 교육과 양질의 의료 서비스에 대한 접근권 등 기본적인 지원과 서비스를 필요로 한다. 일부 아동은 빈곤이나 장애, 소수 인종 및 문화 집단, 양육과 보호의 필요성, 비행 및 자해 행동 등을 이유로 추가적인 지원과 서비스를 필요로 한다.", "하브루타에 임하는 학생들의 태도는 다양하다. 기본적인 학습에 대한 참여율에 따라 상당한 차이를 보인다. 앞에서 언급한 인재시교에 다다를 때까지 기다려주고 관심가져주며, 칭찬과 극려의 말로 지지 할 수 있어야 한다. 비고츠키(Vygotsky)는 근접 발달영역(the zone of proximal development: ZPD)을“독자적으로 문제를 해결함으로써 결정되는 실제적 발달 수준과 성인의 안내나 보다 능력 있는 또래들과 협동하여 문제를 해결함으로써 결정되는 잠재적 발달 수준 간의 거리”로 규정한다. 근접발달 영역(the zone of proximal development)은 실제적 발달 수준(actualdevelopmental level)과 잠재적 발달수준(potential developmental level)사이를 역동적인 공간으로 이야기 한다. 즉 하브루타는 소속한 학습자(친구) 상호작용을 통하여 잠재적 발달수준(potential developmental level)까지 도달하는 것이다. 이러한 작용에 꼭 필요한 것 중 하나는 학습자가 수업에 임하는 태도이다. 즉 학습자의 동기부여를 어떻게 불러일으킬 수 있느냐가 관권이다.", "KTR이 영국의 CE인증기관인 HPi Verification Service Ltd(이하 HPiVS) 와 협력을 강화하기로 했다. 최형기 KTR 원장과 Mr. Alasdair Lewis Reay HPiVS 원장은 유럽으로 수출하는 압력플랜트 설비, 용접, 산업용 기계류에 대한 CE인증업무 협력을 위해 11월 25일 과천청사 5층 아리랑홀에서 협약을 체결했다. KTR은 국내 압력장비 및 기계류 인증 관련 업계의 인증 수요가 증가함에 따라, 현지 기관과의 업무협력을 강화해 인증사업 체계를 확립하기 위해 협약을 체결했다. 협약 체결 후 HPiVS는 KTR 과천청사 내 주요 시험실을 견학하며 연구원 현황을 파악하고 KTR과의 사업 협력 방안에 대해 논의하는 시간을 가졌다. HPiVS는 유럽위원회로부터 인정받은 영국의 유럽 인증기관으로서 플랜트 압력설비, 산업용 기계류, 레저용 장비, 단순압력장비 4개 제품군의 CE인증 권한을 지니고 있다."]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,461 |
RichardErkhov/4bit_-_Qwen2-VL-2B-Instruct-awq
|
RichardErkhov
| null |
[
"safetensors",
"qwen2_vl",
"arxiv:2308.12966",
"4-bit",
"awq",
"region:us"
] | 2025-01-08T05:30:04Z |
2025-01-08T05:31:36+00:00
| 5 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2-VL-2B-Instruct - AWQ
- Model creator: https://huggingface.co/4bit/
- Original model: https://huggingface.co/4bit/Qwen2-VL-2B-Instruct/
Original model description:
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- multimodal
base_model: Qwen/Qwen2-VL-2B-Instruct
---
# Qwen2-VL-2B-Instruct
## Introduction
We're excited to unveil **Qwen2-VL**, the latest iteration of our Qwen-VL model, representing nearly a year of innovation.
### What’s New in Qwen2-VL?
#### Key Enhancements:
* **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
* **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.
* **Agent that can operate your mobiles, robots, etc.**: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions.
* **Multilingual Support**: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.
#### Model Architecture Updates:
* **Naive Dynamic Resolution**: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience.
<p align="center">
<img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/qwen2_vl.jpg" width="80%"/>
<p>
* **Multimodal Rotary Position Embedding (M-ROPE)**: Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities.
<p align="center">
<img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/mrope.png" width="80%"/>
<p>
We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 2B Qwen2-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub](https://github.com/QwenLM/Qwen2-VL).
## Evaluation
### Image Benchmarks
| Benchmark | InternVL2-2B | MiniCPM-V 2.0 | **Qwen2-VL-2B** |
| :--- | :---: | :---: | :---: |
| DocVQA<sub>test</sub> | 86.9 | - | **90.1** |
| InfoVQA<sub>test</sub> | 58.9 | - | **65.5** |
| ChartQA<sub>test</sub> | **76.2** | - | 73.5 |
| TextVQA<sub>val</sub> | 73.4 | - | **79.7** |
| OCRBench | 781 | 605 | **794** |
| MTVQA | - | - | **20.0** |
| MMMU<sub>val</sub> | 36.3 | 38.2 | **41.1** |
| RealWorldQA | 57.3 | 55.8 | **62.9** |
| MME<sub>sum</sub> | **1876.8** | 1808.6 | 1872.0 |
| MMBench-EN<sub>test</sub> | 73.2 | 69.1 | **74.9** |
| MMBench-CN<sub>test</sub> | 70.9 | 66.5 | **73.5** |
| MMBench-V1.1<sub>test</sub> | 69.6 | 65.8 | **72.2** |
| MMT-Bench<sub>test</sub> | - | - | **54.5** |
| MMStar | **49.8** | 39.1 | 48.0 |
| MMVet<sub>GPT-4-Turbo</sub> | 39.7 | 41.0 | **49.5** |
| HallBench<sub>avg</sub> | 38.0 | 36.1 | **41.7** |
| MathVista<sub>testmini</sub> | **46.0** | 39.8 | 43.0 |
| MathVision | - | - | **12.4** |
### Video Benchmarks
| Benchmark | **Qwen2-VL-2B** |
| :--- | :---: |
| MVBench | **63.2** |
| PerceptionTest<sub>test</sub> | **53.9** |
| EgoSchema<sub>test</sub> | **54.9** |
| Video-MME<sub>wo/w subs</sub> | **55.6**/**60.4** |
## Requirements
The code of Qwen2-VL has been in the latest Hugging face transformers and we advise you to build from source with command `pip install git+https://github.com/huggingface/transformers`, or you might encounter the following error:
```
KeyError: 'qwen2_vl'
```
## Quickstart
We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:
```bash
pip install qwen-vl-utils
```
Here we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`:
```python
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2VLForConditionalGeneration.from_pretrained(
# "Qwen/Qwen2-VL-2B-Instruct",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
# default processer
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")
# The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
<details>
<summary>Without qwen_vl_utils</summary>
```python
from PIL import Image
import requests
import torch
from torchvision import io
from typing import Dict
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
# Load the model in half-precision on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")
# Image
url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
conversation = [
{
"role": "user",
"content": [
{
"type": "image",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Excepted output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'
inputs = processor(
text=[text_prompt], images=[image], padding=True, return_tensors="pt"
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [
output_ids[len(input_ids) :]
for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
```
</details>
<details>
<summary>Multi image inference</summary>
```python
# Messages containing multiple images and a text query
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "Identify the similarities between these images."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Video inference</summary>
```python
# Messages containing a images list as a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": [
"file:///path/to/frame1.jpg",
"file:///path/to/frame2.jpg",
"file:///path/to/frame3.jpg",
"file:///path/to/frame4.jpg",
],
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Messages containing a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "file:///path/to/video1.mp4",
"max_pixels": 360 * 420,
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Batch inference</summary>
```python
# Sample messages for batch inference
messages1 = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "What are the common elements in these pictures?"},
],
}
]
messages2 = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages1]
# Preparation for batch inference
texts = [
processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=texts,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)
```
</details>
### More Usage Tips
For input images, we support local files, base64, and URLs. For videos, we currently only support local files.
```python
# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.
## Local file path
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Image URL
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "http://path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Base64 encoded image
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "data:image;base64,/9j/..."},
{"type": "text", "text": "Describe this image."},
],
}
]
```
#### Image Resolution for performance boost
The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.
```python
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
)
```
Besides, We provide two methods for fine-grained control over the image size input to the model:
1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.
2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28.
```python
# min_pixels and max_pixels
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"resized_height": 280,
"resized_width": 420,
},
{"type": "text", "text": "Describe this image."},
],
}
]
# resized_height and resized_width
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"min_pixels": 50176,
"max_pixels": 50176,
},
{"type": "text", "text": "Describe this image."},
],
}
]
```
## Limitations
While Qwen2-VL are applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known restrictions:
1. Lack of Audio Support: The current model does **not comprehend audio information** within videos.
2. Data timeliness: Our image dataset is **updated until June 2023**, and information subsequent to this date may not be covered.
3. Constraints in Individuals and Intellectual Property (IP): The model's capacity to recognize specific individuals or IPs is limited, potentially failing to comprehensively cover all well-known personalities or brands.
4. Limited Capacity for Complex Instruction: When faced with intricate multi-step instructions, the model's understanding and execution capabilities require enhancement.
5. Insufficient Counting Accuracy: Particularly in complex scenes, the accuracy of object counting is not high, necessitating further improvements.
6. Weak Spatial Reasoning Skills: Especially in 3D spaces, the model's inference of object positional relationships is inadequate, making it difficult to precisely judge the relative positions of objects.
These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application.
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{Qwen2-VL,
title={Qwen2-VL},
author={Qwen team},
year={2024}
}
@article{Qwen-VL,
title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
journal={arXiv preprint arXiv:2308.12966},
year={2023}
}
```
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2-VL-2B-Instruct - AWQ
- Model creator: https://huggingface.co/4bit/
- Original model: https://huggingface.co/4bit/Qwen2-VL-2B-Instruct/
Original model description:
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- multimodal
base_model: Qwen/Qwen2-VL-2B-Instruct
---
# Qwen2-VL-2B-Instruct
## Introduction
We're excited to unveil **Qwen2-VL**, the latest iteration of our Qwen-VL model, representing nearly a year of innovation.
### What’s New in Qwen2-VL?
#### Key Enhancements:
* **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
* **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.
* **Agent that can operate your mobiles, robots, etc.**: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions.
* **Multilingual Support**: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.
#### Model Architecture Updates:
* **Naive Dynamic Resolution**: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience.
<p align="center">
<img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/qwen2_vl.jpg" width="80%"/>
<p>
* **Multimodal Rotary Position Embedding (M-ROPE)**: Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities.
<p align="center">
<img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/mrope.png" width="80%"/>
<p>
We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 2B Qwen2-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub](https://github.com/QwenLM/Qwen2-VL).
## Evaluation
### Image Benchmarks
| Benchmark | InternVL2-2B | MiniCPM-V 2.0 | **Qwen2-VL-2B** |
| :--- | :---: | :---: | :---: |
| DocVQA<sub>test</sub> | 86.9 | - | **90.1** |
| InfoVQA<sub>test</sub> | 58.9 | - | **65.5** |
| ChartQA<sub>test</sub> | **76.2** | - | 73.5 |
| TextVQA<sub>val</sub> | 73.4 | - | **79.7** |
| OCRBench | 781 | 605 | **794** |
| MTVQA | - | - | **20.0** |
| MMMU<sub>val</sub> | 36.3 | 38.2 | **41.1** |
| RealWorldQA | 57.3 | 55.8 | **62.9** |
| MME<sub>sum</sub> | **1876.8** | 1808.6 | 1872.0 |
| MMBench-EN<sub>test</sub> | 73.2 | 69.1 | **74.9** |
| MMBench-CN<sub>test</sub> | 70.9 | 66.5 | **73.5** |
| MMBench-V1.1<sub>test</sub> | 69.6 | 65.8 | **72.2** |
| MMT-Bench<sub>test</sub> | - | - | **54.5** |
| MMStar | **49.8** | 39.1 | 48.0 |
| MMVet<sub>GPT-4-Turbo</sub> | 39.7 | 41.0 | **49.5** |
| HallBench<sub>avg</sub> | 38.0 | 36.1 | **41.7** |
| MathVista<sub>testmini</sub> | **46.0** | 39.8 | 43.0 |
| MathVision | - | - | **12.4** |
### Video Benchmarks
| Benchmark | **Qwen2-VL-2B** |
| :--- | :---: |
| MVBench | **63.2** |
| PerceptionTest<sub>test</sub> | **53.9** |
| EgoSchema<sub>test</sub> | **54.9** |
| Video-MME<sub>wo/w subs</sub> | **55.6**/**60.4** |
## Requirements
The code of Qwen2-VL has been in the latest Hugging face transformers and we advise you to build from source with command `pip install git+https://github.com/huggingface/transformers`, or you might encounter the following error:
```
KeyError: 'qwen2_vl'
```
## Quickstart
We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:
```bash
pip install qwen-vl-utils
```
Here we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`:
```python
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2VLForConditionalGeneration.from_pretrained(
# "Qwen/Qwen2-VL-2B-Instruct",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
# default processer
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")
# The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
<details>
<summary>Without qwen_vl_utils</summary>
```python
from PIL import Image
import requests
import torch
from torchvision import io
from typing import Dict
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
# Load the model in half-precision on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")
# Image
url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
conversation = [
{
"role": "user",
"content": [
{
"type": "image",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Excepted output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'
inputs = processor(
text=[text_prompt], images=[image], padding=True, return_tensors="pt"
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [
output_ids[len(input_ids) :]
for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
```
</details>
<details>
<summary>Multi image inference</summary>
```python
# Messages containing multiple images and a text query
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "Identify the similarities between these images."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Video inference</summary>
```python
# Messages containing a images list as a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": [
"file:///path/to/frame1.jpg",
"file:///path/to/frame2.jpg",
"file:///path/to/frame3.jpg",
"file:///path/to/frame4.jpg",
],
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Messages containing a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "file:///path/to/video1.mp4",
"max_pixels": 360 * 420,
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Batch inference</summary>
```python
# Sample messages for batch inference
messages1 = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "What are the common elements in these pictures?"},
],
}
]
messages2 = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages1]
# Preparation for batch inference
texts = [
processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=texts,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)
```
</details>
### More Usage Tips
For input images, we support local files, base64, and URLs. For videos, we currently only support local files.
```python
# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.
## Local file path
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Image URL
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "http://path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Base64 encoded image
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "data:image;base64,/9j/..."},
{"type": "text", "text": "Describe this image."},
],
}
]
```
#### Image Resolution for performance boost
The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.
```python
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
)
```
Besides, We provide two methods for fine-grained control over the image size input to the model:
1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.
2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28.
```python
# min_pixels and max_pixels
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"resized_height": 280,
"resized_width": 420,
},
{"type": "text", "text": "Describe this image."},
],
}
]
# resized_height and resized_width
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"min_pixels": 50176,
"max_pixels": 50176,
},
{"type": "text", "text": "Describe this image."},
],
}
]
```
## Limitations
While Qwen2-VL are applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known restrictions:
1. Lack of Audio Support: The current model does **not comprehend audio information** within videos.
2. Data timeliness: Our image dataset is **updated until June 2023**, and information subsequent to this date may not be covered.
3. Constraints in Individuals and Intellectual Property (IP): The model's capacity to recognize specific individuals or IPs is limited, potentially failing to comprehensively cover all well-known personalities or brands.
4. Limited Capacity for Complex Instruction: When faced with intricate multi-step instructions, the model's understanding and execution capabilities require enhancement.
5. Insufficient Counting Accuracy: Particularly in complex scenes, the accuracy of object counting is not high, necessitating further improvements.
6. Weak Spatial Reasoning Skills: Especially in 3D spaces, the model's inference of object positional relationships is inadequate, making it difficult to precisely judge the relative positions of objects.
These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application.
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{Qwen2-VL,
title={Qwen2-VL},
author={Qwen team},
year={2024}
}
@article{Qwen-VL,
title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
journal={arXiv preprint arXiv:2308.12966},
year={2023}
}
```
|
{}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,462 |
LoneStriker/zephyr-7b-gemma-v0.1-6.0bpw-h6-exl2
|
LoneStriker
|
text-generation
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:argilla/dpo-mix-7k",
"arxiv:2310.16944",
"base_model:HuggingFaceH4/zephyr-7b-gemma-sft-v0.1",
"base_model:finetune:HuggingFaceH4/zephyr-7b-gemma-sft-v0.1",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-03-02T04:01:15Z |
2024-03-02T04:04:37+00:00
| 9 | 0 |
---
base_model: HuggingFaceH4/zephyr-7b-gemma-sft-v0.1
datasets:
- argilla/dpo-mix-7k
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
pipeline_tag: text-generation
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
model-index:
- name: zephyr-7b-gemma
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: MT-Bench
type: unknown
metrics:
- type: unknown
value: 7.81
name: score
source:
url: https://huggingface.co/spaces/lmsys/mt-bench
---
<img src="https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1/resolve/main/thumbnail.png" alt="Zephyr 7B Gemma Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for Zephyr 7B Gemma
Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr 7B Gemma is the third model in the series, and is a fine-tuned version of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) that was trained on on a mix of publicly available, synthetic datasets using Direct Preference Optimization (DPO). You can reproduce the training of this model via the recipe provided in the [Alignment Handbook](https://github.com/huggingface/alignment-handbook).
## Model description
- **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **License:** Gemma Terms of Use
- **Finetuned from model:** [google/gemma-7b](https://huggingface.co/google/gemma-7b)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/alignment-handbook
- **Demo:** https://huggingface.co/spaces/HuggingFaceH4/zephyr-7b-gemma-chat
## Performance
| Model |MT Bench⬇️|IFEval|
|-----------------------------------------------------------------------|------:|------:|
|[zephyr-7b-gemma-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1)| 7.81 | 28.76|
|[zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 7.34 | 43.81|
|[google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) | 6.38 | 38.01|
| Model |AGIEval|GPT4All|TruthfulQA|BigBench|Average ⬇️|
|-----------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 37.52| 71.77| 55.26| 39.77| 51.08|
|[zephyr-7b-gemma-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1)| 34.22| 66.37| 52.19| 37.10| 47.47|
|[mlabonne/Gemmalpaca-7B](https://huggingface.co/mlabonne/Gemmalpaca-7B)| 21.6 | 40.87| 44.85 | 30.49| 34.45|
|[google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) | 21.33| 40.84| 41.70| 30.25| 33.53|
<details><summary>Details of AGIEval, GPT4All, TruthfulQA, BigBench </summary>
### AGIEval
| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |21.65|± | 2.59|
| | |acc_norm|25.20|± | 2.73|
|agieval_logiqa_en | 0|acc |34.72|± | 1.87|
| | |acc_norm|35.94|± | 1.88|
|agieval_lsat_ar | 0|acc |19.57|± | 2.62|
| | |acc_norm|21.74|± | 2.73|
|agieval_lsat_lr | 0|acc |30.59|± | 2.04|
| | |acc_norm|32.55|± | 2.08|
|agieval_lsat_rc | 0|acc |49.07|± | 3.05|
| | |acc_norm|42.75|± | 3.02|
|agieval_sat_en | 0|acc |54.85|± | 3.48|
| | |acc_norm|53.40|± | 3.48|
|agieval_sat_en_without_passage| 0|acc |37.38|± | 3.38|
| | |acc_norm|33.98|± | 3.31|
|agieval_sat_math | 0|acc |30.91|± | 3.12|
| | |acc_norm|28.18|± | 3.04|
Average: 34.22%
### GPT4All
| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |49.15|± | 1.46|
| | |acc_norm|52.47|± | 1.46|
|arc_easy | 0|acc |77.44|± | 0.86|
| | |acc_norm|74.75|± | 0.89|
|boolq | 1|acc |79.69|± | 0.70|
|hellaswag | 0|acc |60.59|± | 0.49|
| | |acc_norm|78.00|± | 0.41|
|openbookqa | 0|acc |29.20|± | 2.04|
| | |acc_norm|37.80|± | 2.17|
|piqa | 0|acc |76.82|± | 0.98|
| | |acc_norm|77.80|± | 0.97|
|winogrande | 0|acc |64.09|± | 1.35|
Average: 66.37%
### TruthfulQA
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |35.74|± | 1.68|
| | |mc2 |52.19|± | 1.59|
Average: 52.19%
### Bigbench
| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|53.68|± | 3.63|
|bigbench_date_understanding | 0|multiple_choice_grade|59.89|± | 2.55|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|30.23|± | 2.86|
|bigbench_geometric_shapes | 0|multiple_choice_grade|11.42|± | 1.68|
| | |exact_str_match | 0.00|± | 0.00|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|28.40|± | 2.02|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|19.14|± | 1.49|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|44.67|± | 2.88|
|bigbench_movie_recommendation | 0|multiple_choice_grade|26.80|± | 1.98|
|bigbench_navigate | 0|multiple_choice_grade|50.00|± | 1.58|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|52.75|± | 1.12|
|bigbench_ruin_names | 0|multiple_choice_grade|33.04|± | 2.22|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|33.37|± | 1.49|
|bigbench_snarks | 0|multiple_choice_grade|48.62|± | 3.73|
|bigbench_sports_understanding | 0|multiple_choice_grade|58.11|± | 1.57|
|bigbench_temporal_sequences | 0|multiple_choice_grade|37.20|± | 1.53|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|20.08|± | 1.13|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|15.77|± | 0.87|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|44.67|± | 2.88|
Average: 37.1%
</details>
## Intended uses & limitations
The model was initially fine-tuned on the [DEITA 10K](https://huggingface.co/datasets/HuggingFaceH4/deita-10k-v0-sft) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.
We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) dataset, which contains 7k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat) to test its capabilities.
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# pip install transformers>=4.38.2
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="HuggingFaceH4/zephyr-7b-gemma-v0.1",
device_map="auto",
torch_dtype=torch.bfloat16,
)
messages = [
{
"role": "system",
"content": "", # Model not yet trained for follow this
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
outputs = pipe(
messages,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_k=50,
top_p=0.95,
stop_sequence="<|im_end|>",
)
print(outputs[0]["generated_text"][-1]["content"])
# It is not possible for a human to eat a helicopter in one sitting, as a
# helicopter is a large and inedible machine. Helicopters are made of metal,
# plastic, and other materials that are not meant to be consumed by humans.
# Eating a helicopter would be extremely dangerous and would likely cause
# serious health problems, including choking, suffocation, and poisoning. It is
# important to only eat food that is safe and intended for human consumption.
```
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Zephyr 7B Gemma has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model (`google/gemma-7b`), however it is likely to have included a mix of Web data and technical sources like books and code. See the [StarCoder2 model card](https://huggingface.co/bigcode/starcoder2-15b) for an example of this.
## Training and evaluation data
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-gemma-sft-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-sft-v0.1) on the argilla/dpo-mix-7k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4695
- Rewards/chosen: -3.3746
- Rewards/rejected: -4.9715
- Rewards/accuracies: 0.7188
- Rewards/margins: 1.5970
- Logps/rejected: -459.4853
- Logps/chosen: -429.9115
- Logits/rejected: 86.4684
- Logits/chosen: 92.8200
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.1923 | 1.9 | 100 | 0.4736 | -3.4575 | -4.9556 | 0.75 | 1.4980 | -459.1662 | -431.5707 | 86.3863 | 92.7360 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1
## Citation Information
If you find this model useful in your work, please consider citing the Zephyr technical report:
```
@misc{tunstall2023zephyr,
title={Zephyr: Direct Distillation of LM Alignment},
author={Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro von Werra and Clémentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M. Rush and Thomas Wolf},
year={2023},
eprint={2310.16944},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
You may also wish to cite the creators of this model as well:
```
@misc{zephyr_7b_gemma,
author = {Lewis Tunstall and Philipp Schmid},
title = {Zephyr 7B Gemma},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1}}
}
```
| null |
Non_BioNLP
|
<img src="https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1/resolve/main/thumbnail.png" alt="Zephyr 7B Gemma Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for Zephyr 7B Gemma
Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr 7B Gemma is the third model in the series, and is a fine-tuned version of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) that was trained on on a mix of publicly available, synthetic datasets using Direct Preference Optimization (DPO). You can reproduce the training of this model via the recipe provided in the [Alignment Handbook](https://github.com/huggingface/alignment-handbook).
## Model description
- **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **License:** Gemma Terms of Use
- **Finetuned from model:** [google/gemma-7b](https://huggingface.co/google/gemma-7b)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/alignment-handbook
- **Demo:** https://huggingface.co/spaces/HuggingFaceH4/zephyr-7b-gemma-chat
## Performance
| Model |MT Bench⬇️|IFEval|
|-----------------------------------------------------------------------|------:|------:|
|[zephyr-7b-gemma-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1)| 7.81 | 28.76|
|[zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 7.34 | 43.81|
|[google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) | 6.38 | 38.01|
| Model |AGIEval|GPT4All|TruthfulQA|BigBench|Average ⬇️|
|-----------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 37.52| 71.77| 55.26| 39.77| 51.08|
|[zephyr-7b-gemma-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1)| 34.22| 66.37| 52.19| 37.10| 47.47|
|[mlabonne/Gemmalpaca-7B](https://huggingface.co/mlabonne/Gemmalpaca-7B)| 21.6 | 40.87| 44.85 | 30.49| 34.45|
|[google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) | 21.33| 40.84| 41.70| 30.25| 33.53|
<details><summary>Details of AGIEval, GPT4All, TruthfulQA, BigBench </summary>
### AGIEval
| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |21.65|± | 2.59|
| | |acc_norm|25.20|± | 2.73|
|agieval_logiqa_en | 0|acc |34.72|± | 1.87|
| | |acc_norm|35.94|± | 1.88|
|agieval_lsat_ar | 0|acc |19.57|± | 2.62|
| | |acc_norm|21.74|± | 2.73|
|agieval_lsat_lr | 0|acc |30.59|± | 2.04|
| | |acc_norm|32.55|± | 2.08|
|agieval_lsat_rc | 0|acc |49.07|± | 3.05|
| | |acc_norm|42.75|± | 3.02|
|agieval_sat_en | 0|acc |54.85|± | 3.48|
| | |acc_norm|53.40|± | 3.48|
|agieval_sat_en_without_passage| 0|acc |37.38|± | 3.38|
| | |acc_norm|33.98|± | 3.31|
|agieval_sat_math | 0|acc |30.91|± | 3.12|
| | |acc_norm|28.18|± | 3.04|
Average: 34.22%
### GPT4All
| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |49.15|± | 1.46|
| | |acc_norm|52.47|± | 1.46|
|arc_easy | 0|acc |77.44|± | 0.86|
| | |acc_norm|74.75|± | 0.89|
|boolq | 1|acc |79.69|± | 0.70|
|hellaswag | 0|acc |60.59|± | 0.49|
| | |acc_norm|78.00|± | 0.41|
|openbookqa | 0|acc |29.20|± | 2.04|
| | |acc_norm|37.80|± | 2.17|
|piqa | 0|acc |76.82|± | 0.98|
| | |acc_norm|77.80|± | 0.97|
|winogrande | 0|acc |64.09|± | 1.35|
Average: 66.37%
### TruthfulQA
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |35.74|± | 1.68|
| | |mc2 |52.19|± | 1.59|
Average: 52.19%
### Bigbench
| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|53.68|± | 3.63|
|bigbench_date_understanding | 0|multiple_choice_grade|59.89|± | 2.55|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|30.23|± | 2.86|
|bigbench_geometric_shapes | 0|multiple_choice_grade|11.42|± | 1.68|
| | |exact_str_match | 0.00|± | 0.00|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|28.40|± | 2.02|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|19.14|± | 1.49|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|44.67|± | 2.88|
|bigbench_movie_recommendation | 0|multiple_choice_grade|26.80|± | 1.98|
|bigbench_navigate | 0|multiple_choice_grade|50.00|± | 1.58|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|52.75|± | 1.12|
|bigbench_ruin_names | 0|multiple_choice_grade|33.04|± | 2.22|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|33.37|± | 1.49|
|bigbench_snarks | 0|multiple_choice_grade|48.62|± | 3.73|
|bigbench_sports_understanding | 0|multiple_choice_grade|58.11|± | 1.57|
|bigbench_temporal_sequences | 0|multiple_choice_grade|37.20|± | 1.53|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|20.08|± | 1.13|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|15.77|± | 0.87|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|44.67|± | 2.88|
Average: 37.1%
</details>
## Intended uses & limitations
The model was initially fine-tuned on the [DEITA 10K](https://huggingface.co/datasets/HuggingFaceH4/deita-10k-v0-sft) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.
We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) dataset, which contains 7k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat) to test its capabilities.
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# pip install transformers>=4.38.2
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="HuggingFaceH4/zephyr-7b-gemma-v0.1",
device_map="auto",
torch_dtype=torch.bfloat16,
)
messages = [
{
"role": "system",
"content": "", # Model not yet trained for follow this
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
outputs = pipe(
messages,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_k=50,
top_p=0.95,
stop_sequence="<|im_end|>",
)
print(outputs[0]["generated_text"][-1]["content"])
# It is not possible for a human to eat a helicopter in one sitting, as a
# helicopter is a large and inedible machine. Helicopters are made of metal,
# plastic, and other materials that are not meant to be consumed by humans.
# Eating a helicopter would be extremely dangerous and would likely cause
# serious health problems, including choking, suffocation, and poisoning. It is
# important to only eat food that is safe and intended for human consumption.
```
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Zephyr 7B Gemma has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model (`google/gemma-7b`), however it is likely to have included a mix of Web data and technical sources like books and code. See the [StarCoder2 model card](https://huggingface.co/bigcode/starcoder2-15b) for an example of this.
## Training and evaluation data
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-gemma-sft-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-sft-v0.1) on the argilla/dpo-mix-7k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4695
- Rewards/chosen: -3.3746
- Rewards/rejected: -4.9715
- Rewards/accuracies: 0.7188
- Rewards/margins: 1.5970
- Logps/rejected: -459.4853
- Logps/chosen: -429.9115
- Logits/rejected: 86.4684
- Logits/chosen: 92.8200
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.1923 | 1.9 | 100 | 0.4736 | -3.4575 | -4.9556 | 0.75 | 1.4980 | -459.1662 | -431.5707 | 86.3863 | 92.7360 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1
## Citation Information
If you find this model useful in your work, please consider citing the Zephyr technical report:
```
@misc{tunstall2023zephyr,
title={Zephyr: Direct Distillation of LM Alignment},
author={Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro von Werra and Clémentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M. Rush and Thomas Wolf},
year={2023},
eprint={2310.16944},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
You may also wish to cite the creators of this model as well:
```
@misc{zephyr_7b_gemma,
author = {Lewis Tunstall and Philipp Schmid},
title = {Zephyr 7B Gemma},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1}}
}
```
|
{"base_model": "HuggingFaceH4/zephyr-7b-gemma-sft-v0.1", "datasets": ["argilla/dpo-mix-7k"], "license": "other", "license_name": "gemma-terms-of-use", "license_link": "https://ai.google.dev/gemma/terms", "pipeline_tag": "text-generation", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer"], "model-index": [{"name": "zephyr-7b-gemma", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MT-Bench", "type": "unknown"}, "metrics": [{"type": "unknown", "value": 7.81, "name": "score"}], "source": {"url": "https://huggingface.co/spaces/lmsys/mt-bench"}}]}]}
|
task
|
[
"TRANSLATION"
] | 42,463 |
LoneStriker/Orca-2-7b-6.0bpw-h6-exl2
|
LoneStriker
|
text-generation
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"orca",
"orca2",
"microsoft",
"arxiv:2311.11045",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-11-21T04:54:20Z |
2023-11-21T04:57:34+00:00
| 7 | 0 |
---
pipeline_tag: text-generation
tags:
- orca
- orca2
- microsoft
---
# Orca 2
<!-- Provide a quick summary of what the model is/does. -->
Orca 2 is a helpful assistant that is built for research purposes only and provides a single turn response
in tasks such as reasoning over user given data, reading comprehension, math problem solving and text summarization.
The model is designed to excel particularly in reasoning.
We open-source Orca 2 to encourage further research on the development, evaluation, and alignment of smaller LMs.
## What is Orca 2’s intended use(s)?
+ Orca 2 is built for research purposes only.
+ The main purpose is to allow the research community to assess its abilities and to provide a foundation for building better frontier models.
## How was Orca 2 evaluated?
+ Orca 2 has been evaluated on a large number of tasks ranging from reasoning to grounding and safety. Please refer
to Section 6 and Appendix in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf) for details on evaluations.
## Model Details
Orca 2 is a finetuned version of LLAMA-2. Orca 2’s training data is a synthetic dataset that was created to enhance the small model’s reasoning abilities.
All synthetic training data was moderated using the Microsoft Azure content filters. More details about the model can be found in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf).
Please refer to LLaMA-2 technical report for details on the model architecture.
## License
Orca 2 is licensed under the [Microsoft Research License](LICENSE).
Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved.
## Bias, Risks, and Limitations
Orca 2, built upon the LLaMA 2 model family, retains many of its limitations, as well as the
common limitations of other large language models or limitation caused by its training
process, including:
**Data Biases**: Large language models, trained on extensive data, can inadvertently carry
biases present in the source data. Consequently, the models may generate outputs that could
be potentially biased or unfair.
**Lack of Contextual Understanding**: Despite their impressive capabilities in language understanding and generation, these models exhibit limited real-world understanding, resulting
in potential inaccuracies or nonsensical responses.
**Lack of Transparency**: Due to the complexity and size, large language models can act
as “black boxes”, making it difficult to comprehend the rationale behind specific outputs or
decisions. We recommend reviewing transparency notes from Azure for more information.
**Content Harms**: There are various types of content harms that large language models
can cause. It is important to be aware of them when using these models, and to take
actions to prevent them. It is recommended to leverage various content moderation services
provided by different companies and institutions. On an important note, we hope for better
regulations and standards from government and technology leaders around content harms
for AI technologies in future. We value and acknowledge the important role that research
and open source community can play in this direction.
**Hallucination**: It is important to be aware and cautious not to entirely rely on a given
language model for critical decisions or information that might have deep impact as it is
not obvious how to prevent these models from fabricating content. Moreover, it is not clear
whether small models may be more susceptible to hallucination in ungrounded generation
use cases due to their smaller sizes and hence reduced memorization capacities. This is an
active research topic and we hope there will be more rigorous measurement, understanding
and mitigations around this topic.
**Potential for Misuse**: Without suitable safeguards, there is a risk that these models could
be maliciously used for generating disinformation or harmful content.
**Data Distribution**: Orca 2’s performance is likely to correlate strongly with the distribution
of the tuning data. This correlation might limit its accuracy in areas underrepresented in
the training dataset such as math, coding, and reasoning.
**System messages**: Orca 2 demonstrates variance in performance depending on the system
instructions. Additionally, the stochasticity introduced by the model size may lead to
generation of non-deterministic responses to different system instructions.
**Zero-Shot Settings**: Orca 2 was trained on data that mostly simulate zero-shot settings.
While the model demonstrate very strong performance in zero-shot settings, it does not show
the same gains of using few-shot learning compared to other, specially larger, models.
**Synthetic data**: As Orca 2 is trained on synthetic data, it could inherit both the advantages
and shortcomings of the models and methods used for data generation. We posit that Orca
2 benefits from the safety measures incorporated during training and safety guardrails (e.g.,
content filter) within the Azure OpenAI API. However, detailed studies are required for
better quantification of such risks.
This model is solely designed for research settings, and its testing has only been carried
out in such environments. It should not be used in downstream applications, as additional
analysis is needed to assess potential harm or bias in the proposed application.
## Getting started with Orca 2
**Inference with Hugging Face library**
```python
import torch
import transformers
if torch.cuda.is_available():
torch.set_default_device("cuda")
else:
torch.set_default_device("cpu")
model = transformers.AutoModelForCausalLM.from_pretrained("microsoft/Orca-2-7b", device_map='auto')
# https://github.com/huggingface/transformers/issues/27132
# please use the slow tokenizer since fast and slow tokenizer produces different tokens
tokenizer = transformers.AutoTokenizer.from_pretrained(
"microsoft/Orca-2-7b",
use_fast=False,
)
system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "How can you determine if a restaurant is popular among locals or mainly attracts tourists, and why might this information be useful?"
prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"
inputs = tokenizer(prompt, return_tensors='pt')
output_ids = model.generate(inputs["input_ids"],)
answer = tokenizer.batch_decode(output_ids)[0]
print(answer)
# This example continues showing how to add a second turn message by the user to the conversation
second_turn_user_message = "Give me a list of the key points of your first answer."
# we set add_special_tokens=False because we dont want to automatically add a bos_token between messages
second_turn_message_in_markup = f"\n<|im_start|>user\n{second_turn_user_message}<|im_end|>\n<|im_start|>assistant"
second_turn_tokens = tokenizer(second_turn_message_in_markup, return_tensors='pt', add_special_tokens=False)
second_turn_input = torch.cat([output_ids, second_turn_tokens['input_ids']], dim=1)
output_ids_2 = model.generate(second_turn_input,)
second_turn_answer = tokenizer.batch_decode(output_ids_2)[0]
print(second_turn_answer)
```
**Safe inference with Azure AI Content Safety**
The usage of [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety/) on top of model prediction is strongly encouraged
and can help preventing some of content harms. Azure AI Content Safety is a content moderation platform
that uses AI to moderate content. By having Azure AI Content Safety on the output of Orca 2,
the model output can be moderated by scanning it for different harm categories including sexual content, violence, hate, and
self-harm with multiple severity levels and multi-lingual detection.
```python
import os
import math
import transformers
import torch
from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import AnalyzeTextOptions
CONTENT_SAFETY_KEY = os.environ["CONTENT_SAFETY_KEY"]
CONTENT_SAFETY_ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]
# We use Azure AI Content Safety to filter out any content that reaches "Medium" threshold
# For more information: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/
def should_filter_out(input_text, threshold=4):
# Create an Content Safety client
client = ContentSafetyClient(CONTENT_SAFETY_ENDPOINT, AzureKeyCredential(CONTENT_SAFETY_KEY))
# Construct a request
request = AnalyzeTextOptions(text=input_text)
# Analyze text
try:
response = client.analyze_text(request)
except HttpResponseError as e:
print("Analyze text failed.")
if e.error:
print(f"Error code: {e.error.code}")
print(f"Error message: {e.error.message}")
raise
print(e)
raise
categories = ["hate_result", "self_harm_result", "sexual_result", "violence_result"]
max_score = -math.inf
for category in categories:
max_score = max(max_score, getattr(response, category).severity)
return max_score >= threshold
model_path = 'microsoft/Orca-2-7b'
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = transformers.AutoModelForCausalLM.from_pretrained(model_path)
model.to(device)
tokenizer = transformers.AutoTokenizer.from_pretrained(
model_path,
model_max_length=4096,
padding_side="right",
use_fast=False,
add_special_tokens=False,
)
system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "\" \n :You can't just say, \"\"that's crap\"\" and remove it without gaining a consensus. You already know this, based on your block history. —/ \" \nIs the comment obscene? \nOptions : Yes, No."
prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"
inputs = tokenizer(prompt, return_tensors='pt')
inputs = inputs.to(device)
output_ids = model.generate(inputs["input_ids"], max_length=4096, do_sample=False, temperature=0.0, use_cache=True)
sequence_length = inputs["input_ids"].shape[1]
new_output_ids = output_ids[:, sequence_length:]
answers = tokenizer.batch_decode(new_output_ids, skip_special_tokens=True)
final_output = answers[0] if not should_filter_out(answers[0]) else "[Content Filtered]"
print(final_output)
```
## Citation
```bibtex
@misc{mitra2023orca,
title={Orca 2: Teaching Small Language Models How to Reason},
author={Arindam Mitra and Luciano Del Corro and Shweti Mahajan and Andres Codas and Clarisse Simoes and Sahaj Agrawal and Xuxi Chen and Anastasia Razdaibiedina and Erik Jones and Kriti Aggarwal and Hamid Palangi and Guoqing Zheng and Corby Rosset and Hamed Khanpour and Ahmed Awadallah},
year={2023},
eprint={2311.11045},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
| null |
Non_BioNLP
|
# Orca 2
<!-- Provide a quick summary of what the model is/does. -->
Orca 2 is a helpful assistant that is built for research purposes only and provides a single turn response
in tasks such as reasoning over user given data, reading comprehension, math problem solving and text summarization.
The model is designed to excel particularly in reasoning.
We open-source Orca 2 to encourage further research on the development, evaluation, and alignment of smaller LMs.
## What is Orca 2’s intended use(s)?
+ Orca 2 is built for research purposes only.
+ The main purpose is to allow the research community to assess its abilities and to provide a foundation for building better frontier models.
## How was Orca 2 evaluated?
+ Orca 2 has been evaluated on a large number of tasks ranging from reasoning to grounding and safety. Please refer
to Section 6 and Appendix in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf) for details on evaluations.
## Model Details
Orca 2 is a finetuned version of LLAMA-2. Orca 2’s training data is a synthetic dataset that was created to enhance the small model’s reasoning abilities.
All synthetic training data was moderated using the Microsoft Azure content filters. More details about the model can be found in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf).
Please refer to LLaMA-2 technical report for details on the model architecture.
## License
Orca 2 is licensed under the [Microsoft Research License](LICENSE).
Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved.
## Bias, Risks, and Limitations
Orca 2, built upon the LLaMA 2 model family, retains many of its limitations, as well as the
common limitations of other large language models or limitation caused by its training
process, including:
**Data Biases**: Large language models, trained on extensive data, can inadvertently carry
biases present in the source data. Consequently, the models may generate outputs that could
be potentially biased or unfair.
**Lack of Contextual Understanding**: Despite their impressive capabilities in language understanding and generation, these models exhibit limited real-world understanding, resulting
in potential inaccuracies or nonsensical responses.
**Lack of Transparency**: Due to the complexity and size, large language models can act
as “black boxes”, making it difficult to comprehend the rationale behind specific outputs or
decisions. We recommend reviewing transparency notes from Azure for more information.
**Content Harms**: There are various types of content harms that large language models
can cause. It is important to be aware of them when using these models, and to take
actions to prevent them. It is recommended to leverage various content moderation services
provided by different companies and institutions. On an important note, we hope for better
regulations and standards from government and technology leaders around content harms
for AI technologies in future. We value and acknowledge the important role that research
and open source community can play in this direction.
**Hallucination**: It is important to be aware and cautious not to entirely rely on a given
language model for critical decisions or information that might have deep impact as it is
not obvious how to prevent these models from fabricating content. Moreover, it is not clear
whether small models may be more susceptible to hallucination in ungrounded generation
use cases due to their smaller sizes and hence reduced memorization capacities. This is an
active research topic and we hope there will be more rigorous measurement, understanding
and mitigations around this topic.
**Potential for Misuse**: Without suitable safeguards, there is a risk that these models could
be maliciously used for generating disinformation or harmful content.
**Data Distribution**: Orca 2’s performance is likely to correlate strongly with the distribution
of the tuning data. This correlation might limit its accuracy in areas underrepresented in
the training dataset such as math, coding, and reasoning.
**System messages**: Orca 2 demonstrates variance in performance depending on the system
instructions. Additionally, the stochasticity introduced by the model size may lead to
generation of non-deterministic responses to different system instructions.
**Zero-Shot Settings**: Orca 2 was trained on data that mostly simulate zero-shot settings.
While the model demonstrate very strong performance in zero-shot settings, it does not show
the same gains of using few-shot learning compared to other, specially larger, models.
**Synthetic data**: As Orca 2 is trained on synthetic data, it could inherit both the advantages
and shortcomings of the models and methods used for data generation. We posit that Orca
2 benefits from the safety measures incorporated during training and safety guardrails (e.g.,
content filter) within the Azure OpenAI API. However, detailed studies are required for
better quantification of such risks.
This model is solely designed for research settings, and its testing has only been carried
out in such environments. It should not be used in downstream applications, as additional
analysis is needed to assess potential harm or bias in the proposed application.
## Getting started with Orca 2
**Inference with Hugging Face library**
```python
import torch
import transformers
if torch.cuda.is_available():
torch.set_default_device("cuda")
else:
torch.set_default_device("cpu")
model = transformers.AutoModelForCausalLM.from_pretrained("microsoft/Orca-2-7b", device_map='auto')
# https://github.com/huggingface/transformers/issues/27132
# please use the slow tokenizer since fast and slow tokenizer produces different tokens
tokenizer = transformers.AutoTokenizer.from_pretrained(
"microsoft/Orca-2-7b",
use_fast=False,
)
system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "How can you determine if a restaurant is popular among locals or mainly attracts tourists, and why might this information be useful?"
prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"
inputs = tokenizer(prompt, return_tensors='pt')
output_ids = model.generate(inputs["input_ids"],)
answer = tokenizer.batch_decode(output_ids)[0]
print(answer)
# This example continues showing how to add a second turn message by the user to the conversation
second_turn_user_message = "Give me a list of the key points of your first answer."
# we set add_special_tokens=False because we dont want to automatically add a bos_token between messages
second_turn_message_in_markup = f"\n<|im_start|>user\n{second_turn_user_message}<|im_end|>\n<|im_start|>assistant"
second_turn_tokens = tokenizer(second_turn_message_in_markup, return_tensors='pt', add_special_tokens=False)
second_turn_input = torch.cat([output_ids, second_turn_tokens['input_ids']], dim=1)
output_ids_2 = model.generate(second_turn_input,)
second_turn_answer = tokenizer.batch_decode(output_ids_2)[0]
print(second_turn_answer)
```
**Safe inference with Azure AI Content Safety**
The usage of [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety/) on top of model prediction is strongly encouraged
and can help preventing some of content harms. Azure AI Content Safety is a content moderation platform
that uses AI to moderate content. By having Azure AI Content Safety on the output of Orca 2,
the model output can be moderated by scanning it for different harm categories including sexual content, violence, hate, and
self-harm with multiple severity levels and multi-lingual detection.
```python
import os
import math
import transformers
import torch
from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import AnalyzeTextOptions
CONTENT_SAFETY_KEY = os.environ["CONTENT_SAFETY_KEY"]
CONTENT_SAFETY_ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]
# We use Azure AI Content Safety to filter out any content that reaches "Medium" threshold
# For more information: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/
def should_filter_out(input_text, threshold=4):
# Create an Content Safety client
client = ContentSafetyClient(CONTENT_SAFETY_ENDPOINT, AzureKeyCredential(CONTENT_SAFETY_KEY))
# Construct a request
request = AnalyzeTextOptions(text=input_text)
# Analyze text
try:
response = client.analyze_text(request)
except HttpResponseError as e:
print("Analyze text failed.")
if e.error:
print(f"Error code: {e.error.code}")
print(f"Error message: {e.error.message}")
raise
print(e)
raise
categories = ["hate_result", "self_harm_result", "sexual_result", "violence_result"]
max_score = -math.inf
for category in categories:
max_score = max(max_score, getattr(response, category).severity)
return max_score >= threshold
model_path = 'microsoft/Orca-2-7b'
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = transformers.AutoModelForCausalLM.from_pretrained(model_path)
model.to(device)
tokenizer = transformers.AutoTokenizer.from_pretrained(
model_path,
model_max_length=4096,
padding_side="right",
use_fast=False,
add_special_tokens=False,
)
system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "\" \n :You can't just say, \"\"that's crap\"\" and remove it without gaining a consensus. You already know this, based on your block history. —/ \" \nIs the comment obscene? \nOptions : Yes, No."
prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"
inputs = tokenizer(prompt, return_tensors='pt')
inputs = inputs.to(device)
output_ids = model.generate(inputs["input_ids"], max_length=4096, do_sample=False, temperature=0.0, use_cache=True)
sequence_length = inputs["input_ids"].shape[1]
new_output_ids = output_ids[:, sequence_length:]
answers = tokenizer.batch_decode(new_output_ids, skip_special_tokens=True)
final_output = answers[0] if not should_filter_out(answers[0]) else "[Content Filtered]"
print(final_output)
```
## Citation
```bibtex
@misc{mitra2023orca,
title={Orca 2: Teaching Small Language Models How to Reason},
author={Arindam Mitra and Luciano Del Corro and Shweti Mahajan and Andres Codas and Clarisse Simoes and Sahaj Agrawal and Xuxi Chen and Anastasia Razdaibiedina and Erik Jones and Kriti Aggarwal and Hamid Palangi and Guoqing Zheng and Corby Rosset and Hamed Khanpour and Ahmed Awadallah},
year={2023},
eprint={2311.11045},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
|
{"pipeline_tag": "text-generation", "tags": ["orca", "orca2", "microsoft"]}
|
task
|
[
"SUMMARIZATION"
] | 42,464 |
tyqiangz/xlm-roberta-base-finetuned-chaii
|
tyqiangz
|
question-answering
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2023-12-07T13:40:36+00:00
| 102 | 0 |
---
datasets: []
license: mit
tags:
- generated_from_trainer
model_index:
- name: xlm-roberta-base-finetuned-chaii
results:
- task:
name: Question Answering
type: question-answering
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-chaii
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4651
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.92 | 1.0 | 899 | 0.4482 |
| 0.8055 | 2.0 | 1798 | 0.3225 |
| 0.7485 | 3.0 | 2697 | 0.4651 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-chaii
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4651
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.92 | 1.0 | 899 | 0.4482 |
| 0.8055 | 2.0 | 1798 | 0.3225 |
| 0.7485 | 3.0 | 2697 | 0.4651 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
{"datasets": [], "license": "mit", "tags": ["generated_from_trainer"], "model_index": [{"name": "xlm-roberta-base-finetuned-chaii", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}}]}]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,465 |
irenelizihui/MarianMT_UFAL_en_fr
|
irenelizihui
|
text2text-generation
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-04-11T19:25:46Z |
2022-04-11T23:03:52+00:00
| 149 | 1 |
---
license: other
---
UFAL English to French Machine Translation Model based on MarianMT model.
| null |
Non_BioNLP
|
UFAL English to French Machine Translation Model based on MarianMT model.
|
{"license": "other"}
|
task
|
[
"TRANSLATION"
] | 42,466 |
aritrasen/bge-base-en-v1.5-ft
|
aritrasen
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:21",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-07-17T14:12:45Z |
2024-07-17T14:13:02+00:00
| 6 | 0 |
---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:21
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: '| Config | Model |
Epochs | Max seq length | Micro batch size | Machine | Training runtime | Cost
| Peak memory | Validation loss | Validation perplexity | Multitask score (MMLU)
|
| --------------------------------- | ---------------------- | ------ | --------------
| ---------------- | ------- | ---------------- | ---- | ----------- | ---------------
| --------------------- | --------------- |
| falcon-7b/lora.yaml | falcon-7b | 4 | 512 |
1 | 1xA10G | 24.84 min | $0.7 | 16.69 GB | 0.945 |
2.573 | 26.2% |
| falcon-7b/lora.yaml | falcon-7b | 4 | 512 |
1 | 4xA10G | 24.94 min | $2.0 | 16.69 GB | 0.945 |
2.573 | 26.4% |
| falcon-7b/qlora.yaml | falcon-7b | 4 | 512 |
1 | 1xA10G | 50.85 min | $1.5 | 9.44 GB | 0.993 |
2.699 | 26.3% |
| falcon-7b/qlora.yaml | falcon-7b | 4 | 512 |
1 | 4xA10G | 50.88 min | $4.1 | 9.44 GB | 0.993 |
2.699 | 26.3% |
| | | | | | | | | | | | |
| gemma-2b/full.yaml | gemma-2b | 1 | 512 |
1 | 4xA10G | 14.06 min | $1.1 | 17.43 GB | 1.021 |
2.777 | 32.4% |
| gemma-2b/lora.yaml | gemma-2b | 2 | 512 |
2 | 1xA10G | 9.41 min | $0.3 | 12.62 GB | 0.981 |
2.666 | 34.4% |'
sentences:
- 'What is the command to download the pretrained model weights for the Llama-2-7b-hf
model?
'
- 'What is the version of nvfuser\_cu121 used?
'
- 'What is the training runtime for the gemma-2b model with the lora configuration?
'
- source_sentence: "# Serve and Deploy LLMs\n\nThis document shows how you can serve\
\ a LitGPT for deployment. \n\n \n## Serve an LLM\n\nThis section illustrates\
\ how we can set up an inference server for a phi-2 LLM using `litgpt serve` that\
\ is minimal and highly scalable.\n\n\n \n## Step 1: Start the inference\
\ server\n\n\n```bash\n# 1) Download a pretrained model (alternatively, use your\
\ own finetuned model)\nlitgpt download --repo_id microsoft/phi-2\n\n# 2) Start\
\ the server\nlitgpt serve --checkpoint_dir checkpoints/microsoft/phi-2\n```\n\
\n> [!TIP]\n> Use `litgpt serve --help` to display additional options, including\
\ the port, devices, LLM temperature setting, and more.\n\n\n \n## Step 2:\
\ Query the inference server\n\nYou can now send requests to the inference server\
\ you started in step 2. For example, in a new Python session, we can send requests\
\ to the inference server as follows:\n\n\n```python\nimport requests, json\n\n\
response = requests.post(\n \"http://127.0.0.1:8000/predict\", \n json={\"\
prompt\": \"Fix typos in the following sentence: Exampel input\"}\n)\n\nprint(response.json()[\"\
output\"])\n```\n\nExecuting the code above prints the following output:\n\n```\n\
Instruct: Fix typos in the following sentence: Exampel input\nOutput: Example\
\ input.\n```"
sentences:
- 'What command do I use to convert the finetuned model to a HF transformer model?
'
- 'How do you merge LoRA weights into the original model''s checkpoint?
'
- 'How can I start an inference server for a phi-2 LLM using litgpt serve?
'
---
# SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("aritrasen/bge-base-en-v1.5-ft")
# Run inference
sentences = [
'# Serve and Deploy LLMs\n\nThis document shows how you can serve a LitGPT for deployment. \n\n \n## Serve an LLM\n\nThis section illustrates how we can set up an inference server for a phi-2 LLM using `litgpt serve` that is minimal and highly scalable.\n\n\n \n## Step 1: Start the inference server\n\n\n```bash\n# 1) Download a pretrained model (alternatively, use your own finetuned model)\nlitgpt download --repo_id microsoft/phi-2\n\n# 2) Start the server\nlitgpt serve --checkpoint_dir checkpoints/microsoft/phi-2\n```\n\n> [!TIP]\n> Use `litgpt serve --help` to display additional options, including the port, devices, LLM temperature setting, and more.\n\n\n \n## Step 2: Query the inference server\n\nYou can now send requests to the inference server you started in step 2. For example, in a new Python session, we can send requests to the inference server as follows:\n\n\n```python\nimport requests, json\n\nresponse = requests.post(\n "http://127.0.0.1:8000/predict", \n json={"prompt": "Fix typos in the following sentence: Exampel input"}\n)\n\nprint(response.json()["output"])\n```\n\nExecuting the code above prints the following output:\n\n```\nInstruct: Fix typos in the following sentence: Exampel input\nOutput: Example input.\n```',
'How can I start an inference server for a phi-2 LLM using litgpt serve?\n',
'What command do I use to convert the finetuned model to a HF transformer model?\n',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 21 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 51 tokens</li><li>mean: 424.62 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 17.19 tokens</li><li>max: 26 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| <code>| 7 B | Llama 2 | bnb.nf4 | 1 | 4,194,304 | 14.14 GB | 3.68 min |<br>| 7 B | Llama 2 | bnb.nf4-dq | 1 | 4,194,304 | 13.84 GB | 3.83 min |<br>| 7 B | Llama 2 | None | 2 | 4,194,304 | 29.07 GB | 2.52 min |<br>| 7 B | Llama 2 | None | 4 | 4,194,304 | OOM | - |<br>| | | | | | | |<br>| 13 B | Llama 2 | None | 1 | 6,553,600 | 38.12 GB | 3.19 min |<br>| 13 B | Llama 2 | bnb.nf4 | 1 | 6,553,600 | 23.14 GB | 6.38 min |<br>| 13 B | Llama 2 | bnb.nf4-dq | 1 | 6,553,600 | 22.55 GB | 6.55 min |<br>| 13 B | Llama 2 | None | 2 | 6,553,600 | OOM | - |<br>| 13 B | Llama 2 | None | 4 | 6,553,600 | OOM | - |<br>| | | | | | | |<br>| 40 B | Falcon | None | 1 | 12,042,240 | OOM | - |<br>| 40 B | Falcon | bnb.nf4 | 1 | 12,042,240 | OOM | - |<br>| 40 B | Falcon | bnb.nf4-dq | 1 | 12,042,240 | OOM | - |</code> | <code>What is the memory usage of Llama 2 with 7B when using bnb.nf4-dq?<br></code> |
| <code>1. Follow the instructions above to load the model into a Hugging Face transformers model.<br><br>2. Create a `model.safetensor` file:<br><br>```python<br>model.save_pretrained("out/hf-tinyllama/converted/")<br>```<br><br>3. Copy the tokenizer files into the model-containing directory:<br><br>```bash<br>cp checkpoints/$repo_id/tokenizer* out/hf-tinyllama/converted<br>```<br><br>4. Run the evaluation harness, for example:<br><br>```bash<br>lm_eval --model hf \<br> --model_args pretrained=out/hf-tinyllama/converted \<br> --tasks "hellaswag,gsm8k,truthfulqa_mc2,mmlu,winogrande,arc_challenge" \<br> --device "cuda:0" \<br> --batch_size 4<br>```</code> | <code>What is the command to run the evaluation harness?<br></code> |
| <code>The LM Evaluation Harness requires a tokenizer to be present in the model checkpoint folder, which we can copy from the original download checkpoint:<br><br>```bash<br># Copy the tokenizer needed by the Eval Harness<br>cp checkpoints/microsoft/phi-2/tokenizer*<br>out/converted_model<br>```<br><br>Then, we can run the Evaluation Harness as follows:<br><br>```bash<br>lm_eval --model hf \<br> --model_args pretrained="out/converted_model" \<br> --tasks "hellaswag,gsm8k,truthfulqa_mc2,mmlu,winogrande,arc_challenge" \<br> --device "cuda:0" \<br> --batch_size 4<br>```<br><br> <br><br>> [!TIP]<br>> The Evaluation Harness tasks above are those used in Open LLM Leaderboard. You can find a list all supported tasks [here](https://github.com/EleutherAI/lm-evaluation-harness/blob/master/docs/task_table.md).<br><br><br><br> <br>**More information and additional resources**<br><br>- [tutorials/convert_lit_models](./convert_lit_models.md): Tutorial on converting LitGPT weights<br><br><br><br> <br><br>## Get involved!<br><br>We appreciate your feedback and contributions. If you have feature requests, questions, or want to contribute code or config files, please don't hesitate to use the [GitHub Issue](https://github.com/Lightning-AI/litgpt/issues) tracker.<br><br>We welcome all individual contributors, regardless of their level of experience or hardware. Your contributions are valuable, and we are excited to see what you can accomplish in this collaborative and supportive environment.<br><br> <br><br>> [!TIP]<br>> Unsure about contributing? Check out our [How to Contribute to LitGPT](https://lightning.ai/pages/community/tutorial/how-to-contribute-to-litgpt/) guide.<br><br> <br><br>If you have general questions about building with LitGPT, please [join our Discord](https://discord.gg/VptPCZkGNa).</code> | <code>What is the command to run the Evaluation Harness?<br></code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 10 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 273 tokens</li><li>mean: 460.8 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 20.1 tokens</li><li>max: 34 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------|
| <code>(this table was sourced from the author's [README](https://github.com/jzhang38/TinyLlama/))<br><br> <br>## Download datasets<br><br>You can download the data using git lfs:<br><br>```bash<br># Make sure you have git-lfs installed (https://git-lfs.com):<br>sudo apt install git-lfs<br>```<br><br>```bash<br>git clone https://huggingface.co/datasets/cerebras/slimpajama-627b data/slimpajama-raw<br>git clone https://huggingface.co/datasets/bigcode/starcoderdata data/starcoderdata-raw<br>```<br><br>Around 1.2 TB of disk space is required to store both datasets.<br><br> <br>## Prepare the datasets for training<br><br>In order to start pretraining litgpt on it, you need to read, tokenize, and write the data in binary chunks. This will leverage the `litdata` optimization pipeline and streaming dataset.<br><br>First, install additional dependencies for preprocessing:<br><br>```bash<br>pip install '.[all]'<br>```<br><br>You will need to have the tokenizer config available:<br><br>```bash<br>litgpt download \<br> --repo_id meta-llama/Llama-2-7b-hf \<br> --access_token your_hf_token \<br> --tokenizer_only true<br>```<br><br>Then, run the preprocessing script for each dataset and split.<br>You will require **1.1 TB** of disk space for Starcoder and **2.5** TB of space for the SlimPajama dataset.<br><br>**Starcoder:**<br><br>```bash<br>python litgpt/data/prepare_starcoder.py \<br> --input_dir data/starcoderdata-raw \<br> --output_dir data/starcoder \<br> --tokenizer_path checkpoints/meta-llama/Llama-2-7b-hf<br>```<br><br>**SlimPajama:**<br><br>```bash<br>python litgpt/data/prepare_slimpajama.py \<br> --input_dir data/slimpajama-raw/validation \<br> --output_dir data/slimpajama/val \<br> --tokenizer_path checkpoints/meta-llama/Llama-2-7b-hf<br><br>python litgpt/data/prepare_slimpajama.py \<br> --input_dir data/slimpajama-raw/test \<br> --output_dir data/slimpajama/test \<br> --tokenizer_path checkpoints/meta-llama/Llama-2-7b-hf<br><br>python litgpt/data/prepare_slimpajama.py \<br> --input_dir data/slimpajama-raw/train \<br> --output_dir data/slimpajama/train \<br> --tokenizer_path checkpoints/meta-llama/Llama-2-7b-hf<br>```</code> | <code>How much disk space is required to store the SlimPajama dataset?<br></code> |
| <code># Serve and Deploy LLMs<br><br>This document shows how you can serve a LitGPT for deployment. <br><br> <br>## Serve an LLM<br><br>This section illustrates how we can set up an inference server for a phi-2 LLM using `litgpt serve` that is minimal and highly scalable.<br><br><br> <br>## Step 1: Start the inference server<br><br><br>```bash<br># 1) Download a pretrained model (alternatively, use your own finetuned model)<br>litgpt download --repo_id microsoft/phi-2<br><br># 2) Start the server<br>litgpt serve --checkpoint_dir checkpoints/microsoft/phi-2<br>```<br><br>> [!TIP]<br>> Use `litgpt serve --help` to display additional options, including the port, devices, LLM temperature setting, and more.<br><br><br> <br>## Step 2: Query the inference server<br><br>You can now send requests to the inference server you started in step 2. For example, in a new Python session, we can send requests to the inference server as follows:<br><br><br>```python<br>import requests, json<br><br>response = requests.post(<br> "http://127.0.0.1:8000/predict", <br> json={"prompt": "Fix typos in the following sentence: Exampel input"}<br>)<br><br>print(response.json()["output"])<br>```<br><br>Executing the code above prints the following output:<br><br>```<br>Instruct: Fix typos in the following sentence: Exampel input<br>Output: Example input.<br>```</code> | <code>How can I start an inference server for a phi-2 LLM using litgpt serve?<br></code> |
| <code># TPU support<br><br>This project utilizes [`Fabric`](https://lightning.ai/docs/fabric/stable), which supports TPUs via [PyTorch XLA](https://github.com/pytorch/xla).<br><br>> [!NOTE]<br>> This guide assumes that you have already set-up your [Google Cloud environment](https://cloud.google.com/run/docs/setup).<br><br>To set up a Google Cloud instance with a TPU v4 VM, run the following commands:<br><br>```shell<br>gcloud compute tpus tpu-vm create litgpt --version=tpu-vm-v4-base --accelerator-type=v4-8 --zone=us-central2-b<br>gcloud compute tpus tpu-vm ssh litgpt --zone=us-central2-b<br>```<br><br>You can also choose a different TPU type. To do so, change the `version`, `accelerator-type`, and `zone` arguments. Find all regions and zones [here](https://cloud.google.com/tpu/docs/regions-zones).<br><br><details><br><summary>Multihost caveats</summary><br><br>TPU v4-8 uses a single host. SSH'ing into the machine and running commands manually will only work when using a single host (1 slice in the TPU pod).<br>In multi-host environments, such as larger TPU pod slices, it's necessary to launch all commands on all hosts simultaneously to avoid hangs.<br>For local development, it is advisable to upload a zip file containing all your current changes and execute it inside the VM from your personal computer:<br><br>```shell<br># Zip the local directory, excluding large directories from the zip. You may want to keep them.<br>zip -r local_changes.zip . -x ".git/*" "checkpoints/*" "data/*" "out/*"<br># Copy the .zip file to the TPU VM<br>gcloud compute tpus tpu-vm scp --worker=all local_changes.zip "litgpt:~"<br># Unzip on each host<br>gcloud compute tpus tpu-vm ssh litgpt --worker=all --command="cd ~; unzip -q -o local_changes.zip"<br><br># Example of a typical workflow<br>gcloud compute tpus tpu-vm ssh tmp --worker=all --command="cd ~; bash install_dependencies.sh"<br>gcloud compute tpus tpu-vm ssh tmp --worker=all --command="cd ~; bash prepare_checkpoints.sh"<br>gcloud compute tpus tpu-vm ssh tmp --worker=all --command="cd ~; bash run_desired_script.sh"</code> | <code>How does this project support TPUs?<br></code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 5
- `per_device_eval_batch_size`: 5
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 5
- `per_device_eval_batch_size`: 5
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss |
|:-----:|:----:|:-------------:|:------:|
| 0.4 | 2 | 0.6407 | 0.4190 |
| 0.8 | 4 | 0.7873 | 0.2789 |
| 1.2 | 6 | 0.1871 | 0.2089 |
| 1.6 | 8 | 0.2125 | 0.1718 |
| 2.0 | 10 | 0.0374 | 0.1648 |
| 2.4 | 12 | 0.1923 | 0.1695 |
| 2.8 | 14 | 0.0183 | 0.1723 |
| 3.2 | 16 | 0.1582 | 0.1770 |
| 3.6 | 18 | 0.0032 | 0.1824 |
| 4.0 | 20 | 0.0015 | 0.1870 |
| 4.4 | 22 | 0.1399 | 0.1901 |
| 4.8 | 24 | 0.002 | 0.1914 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.27.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("aritrasen/bge-base-en-v1.5-ft")
# Run inference
sentences = [
'# Serve and Deploy LLMs\n\nThis document shows how you can serve a LitGPT for deployment. \n\n \n## Serve an LLM\n\nThis section illustrates how we can set up an inference server for a phi-2 LLM using `litgpt serve` that is minimal and highly scalable.\n\n\n \n## Step 1: Start the inference server\n\n\n```bash\n# 1) Download a pretrained model (alternatively, use your own finetuned model)\nlitgpt download --repo_id microsoft/phi-2\n\n# 2) Start the server\nlitgpt serve --checkpoint_dir checkpoints/microsoft/phi-2\n```\n\n> [!TIP]\n> Use `litgpt serve --help` to display additional options, including the port, devices, LLM temperature setting, and more.\n\n\n \n## Step 2: Query the inference server\n\nYou can now send requests to the inference server you started in step 2. For example, in a new Python session, we can send requests to the inference server as follows:\n\n\n```python\nimport requests, json\n\nresponse = requests.post(\n "http://127.0.0.1:8000/predict", \n json={"prompt": "Fix typos in the following sentence: Exampel input"}\n)\n\nprint(response.json()["output"])\n```\n\nExecuting the code above prints the following output:\n\n```\nInstruct: Fix typos in the following sentence: Exampel input\nOutput: Example input.\n```',
'How can I start an inference server for a phi-2 LLM using litgpt serve?\n',
'What command do I use to convert the finetuned model to a HF transformer model?\n',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 21 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 51 tokens</li><li>mean: 424.62 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 17.19 tokens</li><li>max: 26 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| <code>| 7 B | Llama 2 | bnb.nf4 | 1 | 4,194,304 | 14.14 GB | 3.68 min |<br>| 7 B | Llama 2 | bnb.nf4-dq | 1 | 4,194,304 | 13.84 GB | 3.83 min |<br>| 7 B | Llama 2 | None | 2 | 4,194,304 | 29.07 GB | 2.52 min |<br>| 7 B | Llama 2 | None | 4 | 4,194,304 | OOM | - |<br>| | | | | | | |<br>| 13 B | Llama 2 | None | 1 | 6,553,600 | 38.12 GB | 3.19 min |<br>| 13 B | Llama 2 | bnb.nf4 | 1 | 6,553,600 | 23.14 GB | 6.38 min |<br>| 13 B | Llama 2 | bnb.nf4-dq | 1 | 6,553,600 | 22.55 GB | 6.55 min |<br>| 13 B | Llama 2 | None | 2 | 6,553,600 | OOM | - |<br>| 13 B | Llama 2 | None | 4 | 6,553,600 | OOM | - |<br>| | | | | | | |<br>| 40 B | Falcon | None | 1 | 12,042,240 | OOM | - |<br>| 40 B | Falcon | bnb.nf4 | 1 | 12,042,240 | OOM | - |<br>| 40 B | Falcon | bnb.nf4-dq | 1 | 12,042,240 | OOM | - |</code> | <code>What is the memory usage of Llama 2 with 7B when using bnb.nf4-dq?<br></code> |
| <code>1. Follow the instructions above to load the model into a Hugging Face transformers model.<br><br>2. Create a `model.safetensor` file:<br><br>```python<br>model.save_pretrained("out/hf-tinyllama/converted/")<br>```<br><br>3. Copy the tokenizer files into the model-containing directory:<br><br>```bash<br>cp checkpoints/$repo_id/tokenizer* out/hf-tinyllama/converted<br>```<br><br>4. Run the evaluation harness, for example:<br><br>```bash<br>lm_eval --model hf \<br> --model_args pretrained=out/hf-tinyllama/converted \<br> --tasks "hellaswag,gsm8k,truthfulqa_mc2,mmlu,winogrande,arc_challenge" \<br> --device "cuda:0" \<br> --batch_size 4<br>```</code> | <code>What is the command to run the evaluation harness?<br></code> |
| <code>The LM Evaluation Harness requires a tokenizer to be present in the model checkpoint folder, which we can copy from the original download checkpoint:<br><br>```bash<br># Copy the tokenizer needed by the Eval Harness<br>cp checkpoints/microsoft/phi-2/tokenizer*<br>out/converted_model<br>```<br><br>Then, we can run the Evaluation Harness as follows:<br><br>```bash<br>lm_eval --model hf \<br> --model_args pretrained="out/converted_model" \<br> --tasks "hellaswag,gsm8k,truthfulqa_mc2,mmlu,winogrande,arc_challenge" \<br> --device "cuda:0" \<br> --batch_size 4<br>```<br><br> <br><br>> [!TIP]<br>> The Evaluation Harness tasks above are those used in Open LLM Leaderboard. You can find a list all supported tasks [here](https://github.com/EleutherAI/lm-evaluation-harness/blob/master/docs/task_table.md).<br><br><br><br> <br>**More information and additional resources**<br><br>- [tutorials/convert_lit_models](./convert_lit_models.md): Tutorial on converting LitGPT weights<br><br><br><br> <br><br>## Get involved!<br><br>We appreciate your feedback and contributions. If you have feature requests, questions, or want to contribute code or config files, please don't hesitate to use the [GitHub Issue](https://github.com/Lightning-AI/litgpt/issues) tracker.<br><br>We welcome all individual contributors, regardless of their level of experience or hardware. Your contributions are valuable, and we are excited to see what you can accomplish in this collaborative and supportive environment.<br><br> <br><br>> [!TIP]<br>> Unsure about contributing? Check out our [How to Contribute to LitGPT](https://lightning.ai/pages/community/tutorial/how-to-contribute-to-litgpt/) guide.<br><br> <br><br>If you have general questions about building with LitGPT, please [join our Discord](https://discord.gg/VptPCZkGNa).</code> | <code>What is the command to run the Evaluation Harness?<br></code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 10 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 273 tokens</li><li>mean: 460.8 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 20.1 tokens</li><li>max: 34 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------|
| <code>(this table was sourced from the author's [README](https://github.com/jzhang38/TinyLlama/))<br><br> <br>## Download datasets<br><br>You can download the data using git lfs:<br><br>```bash<br># Make sure you have git-lfs installed (https://git-lfs.com):<br>sudo apt install git-lfs<br>```<br><br>```bash<br>git clone https://huggingface.co/datasets/cerebras/slimpajama-627b data/slimpajama-raw<br>git clone https://huggingface.co/datasets/bigcode/starcoderdata data/starcoderdata-raw<br>```<br><br>Around 1.2 TB of disk space is required to store both datasets.<br><br> <br>## Prepare the datasets for training<br><br>In order to start pretraining litgpt on it, you need to read, tokenize, and write the data in binary chunks. This will leverage the `litdata` optimization pipeline and streaming dataset.<br><br>First, install additional dependencies for preprocessing:<br><br>```bash<br>pip install '.[all]'<br>```<br><br>You will need to have the tokenizer config available:<br><br>```bash<br>litgpt download \<br> --repo_id meta-llama/Llama-2-7b-hf \<br> --access_token your_hf_token \<br> --tokenizer_only true<br>```<br><br>Then, run the preprocessing script for each dataset and split.<br>You will require **1.1 TB** of disk space for Starcoder and **2.5** TB of space for the SlimPajama dataset.<br><br>**Starcoder:**<br><br>```bash<br>python litgpt/data/prepare_starcoder.py \<br> --input_dir data/starcoderdata-raw \<br> --output_dir data/starcoder \<br> --tokenizer_path checkpoints/meta-llama/Llama-2-7b-hf<br>```<br><br>**SlimPajama:**<br><br>```bash<br>python litgpt/data/prepare_slimpajama.py \<br> --input_dir data/slimpajama-raw/validation \<br> --output_dir data/slimpajama/val \<br> --tokenizer_path checkpoints/meta-llama/Llama-2-7b-hf<br><br>python litgpt/data/prepare_slimpajama.py \<br> --input_dir data/slimpajama-raw/test \<br> --output_dir data/slimpajama/test \<br> --tokenizer_path checkpoints/meta-llama/Llama-2-7b-hf<br><br>python litgpt/data/prepare_slimpajama.py \<br> --input_dir data/slimpajama-raw/train \<br> --output_dir data/slimpajama/train \<br> --tokenizer_path checkpoints/meta-llama/Llama-2-7b-hf<br>```</code> | <code>How much disk space is required to store the SlimPajama dataset?<br></code> |
| <code># Serve and Deploy LLMs<br><br>This document shows how you can serve a LitGPT for deployment. <br><br> <br>## Serve an LLM<br><br>This section illustrates how we can set up an inference server for a phi-2 LLM using `litgpt serve` that is minimal and highly scalable.<br><br><br> <br>## Step 1: Start the inference server<br><br><br>```bash<br># 1) Download a pretrained model (alternatively, use your own finetuned model)<br>litgpt download --repo_id microsoft/phi-2<br><br># 2) Start the server<br>litgpt serve --checkpoint_dir checkpoints/microsoft/phi-2<br>```<br><br>> [!TIP]<br>> Use `litgpt serve --help` to display additional options, including the port, devices, LLM temperature setting, and more.<br><br><br> <br>## Step 2: Query the inference server<br><br>You can now send requests to the inference server you started in step 2. For example, in a new Python session, we can send requests to the inference server as follows:<br><br><br>```python<br>import requests, json<br><br>response = requests.post(<br> "http://127.0.0.1:8000/predict", <br> json={"prompt": "Fix typos in the following sentence: Exampel input"}<br>)<br><br>print(response.json()["output"])<br>```<br><br>Executing the code above prints the following output:<br><br>```<br>Instruct: Fix typos in the following sentence: Exampel input<br>Output: Example input.<br>```</code> | <code>How can I start an inference server for a phi-2 LLM using litgpt serve?<br></code> |
| <code># TPU support<br><br>This project utilizes [`Fabric`](https://lightning.ai/docs/fabric/stable), which supports TPUs via [PyTorch XLA](https://github.com/pytorch/xla).<br><br>> [!NOTE]<br>> This guide assumes that you have already set-up your [Google Cloud environment](https://cloud.google.com/run/docs/setup).<br><br>To set up a Google Cloud instance with a TPU v4 VM, run the following commands:<br><br>```shell<br>gcloud compute tpus tpu-vm create litgpt --version=tpu-vm-v4-base --accelerator-type=v4-8 --zone=us-central2-b<br>gcloud compute tpus tpu-vm ssh litgpt --zone=us-central2-b<br>```<br><br>You can also choose a different TPU type. To do so, change the `version`, `accelerator-type`, and `zone` arguments. Find all regions and zones [here](https://cloud.google.com/tpu/docs/regions-zones).<br><br><details><br><summary>Multihost caveats</summary><br><br>TPU v4-8 uses a single host. SSH'ing into the machine and running commands manually will only work when using a single host (1 slice in the TPU pod).<br>In multi-host environments, such as larger TPU pod slices, it's necessary to launch all commands on all hosts simultaneously to avoid hangs.<br>For local development, it is advisable to upload a zip file containing all your current changes and execute it inside the VM from your personal computer:<br><br>```shell<br># Zip the local directory, excluding large directories from the zip. You may want to keep them.<br>zip -r local_changes.zip . -x ".git/*" "checkpoints/*" "data/*" "out/*"<br># Copy the .zip file to the TPU VM<br>gcloud compute tpus tpu-vm scp --worker=all local_changes.zip "litgpt:~"<br># Unzip on each host<br>gcloud compute tpus tpu-vm ssh litgpt --worker=all --command="cd ~; unzip -q -o local_changes.zip"<br><br># Example of a typical workflow<br>gcloud compute tpus tpu-vm ssh tmp --worker=all --command="cd ~; bash install_dependencies.sh"<br>gcloud compute tpus tpu-vm ssh tmp --worker=all --command="cd ~; bash prepare_checkpoints.sh"<br>gcloud compute tpus tpu-vm ssh tmp --worker=all --command="cd ~; bash run_desired_script.sh"</code> | <code>How does this project support TPUs?<br></code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 5
- `per_device_eval_batch_size`: 5
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 5
- `per_device_eval_batch_size`: 5
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss |
|:-----:|:----:|:-------------:|:------:|
| 0.4 | 2 | 0.6407 | 0.4190 |
| 0.8 | 4 | 0.7873 | 0.2789 |
| 1.2 | 6 | 0.1871 | 0.2089 |
| 1.6 | 8 | 0.2125 | 0.1718 |
| 2.0 | 10 | 0.0374 | 0.1648 |
| 2.4 | 12 | 0.1923 | 0.1695 |
| 2.8 | 14 | 0.0183 | 0.1723 |
| 3.2 | 16 | 0.1582 | 0.1770 |
| 3.6 | 18 | 0.0032 | 0.1824 |
| 4.0 | 20 | 0.0015 | 0.1870 |
| 4.4 | 22 | 0.1399 | 0.1901 |
| 4.8 | 24 | 0.002 | 0.1914 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.27.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "BAAI/bge-base-en-v1.5", "datasets": [], "language": [], "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:21", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "| Config | Model | Epochs | Max seq length | Micro batch size | Machine | Training runtime | Cost | Peak memory | Validation loss | Validation perplexity | Multitask score (MMLU) |\n| --------------------------------- | ---------------------- | ------ | -------------- | ---------------- | ------- | ---------------- | ---- | ----------- | --------------- | --------------------- | --------------- |\n| falcon-7b/lora.yaml | falcon-7b | 4 | 512 | 1 | 1xA10G | 24.84 min | $0.7 | 16.69 GB | 0.945 | 2.573 | 26.2% |\n| falcon-7b/lora.yaml | falcon-7b | 4 | 512 | 1 | 4xA10G | 24.94 min | $2.0 | 16.69 GB | 0.945 | 2.573 | 26.4% |\n| falcon-7b/qlora.yaml | falcon-7b | 4 | 512 | 1 | 1xA10G | 50.85 min | $1.5 | 9.44 GB | 0.993 | 2.699 | 26.3% |\n| falcon-7b/qlora.yaml | falcon-7b | 4 | 512 | 1 | 4xA10G | 50.88 min | $4.1 | 9.44 GB | 0.993 | 2.699 | 26.3% |\n| | | | | | | | | | | | |\n| gemma-2b/full.yaml | gemma-2b | 1 | 512 | 1 | 4xA10G | 14.06 min | $1.1 | 17.43 GB | 1.021 | 2.777 | 32.4% |\n| gemma-2b/lora.yaml | gemma-2b | 2 | 512 | 2 | 1xA10G | 9.41 min | $0.3 | 12.62 GB | 0.981 | 2.666 | 34.4% |", "sentences": ["What is the command to download the pretrained model weights for the Llama-2-7b-hf model?\n", "What is the version of nvfuser\\_cu121 used?\n", "What is the training runtime for the gemma-2b model with the lora configuration?\n"]}, {"source_sentence": "# Serve and Deploy LLMs\n\nThis document shows how you can serve a LitGPT for deployment. \n\n \n## Serve an LLM\n\nThis section illustrates how we can set up an inference server for a phi-2 LLM using `litgpt serve` that is minimal and highly scalable.\n\n\n \n## Step 1: Start the inference server\n\n\n```bash\n# 1) Download a pretrained model (alternatively, use your own finetuned model)\nlitgpt download --repo_id microsoft/phi-2\n\n# 2) Start the server\nlitgpt serve --checkpoint_dir checkpoints/microsoft/phi-2\n```\n\n> [!TIP]\n> Use `litgpt serve --help` to display additional options, including the port, devices, LLM temperature setting, and more.\n\n\n \n## Step 2: Query the inference server\n\nYou can now send requests to the inference server you started in step 2. For example, in a new Python session, we can send requests to the inference server as follows:\n\n\n```python\nimport requests, json\n\nresponse = requests.post(\n \"http://127.0.0.1:8000/predict\", \n json={\"prompt\": \"Fix typos in the following sentence: Exampel input\"}\n)\n\nprint(response.json()[\"output\"])\n```\n\nExecuting the code above prints the following output:\n\n```\nInstruct: Fix typos in the following sentence: Exampel input\nOutput: Example input.\n```", "sentences": ["What command do I use to convert the finetuned model to a HF transformer model?\n", "How do you merge LoRA weights into the original model's checkpoint?\n", "How can I start an inference server for a phi-2 LLM using litgpt serve?\n"]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,467 |
MaLA-LM/lucky52-bloom-7b1-no-22
|
MaLA-LM
|
text-generation
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"generation",
"question answering",
"instruction tuning",
"multilingual",
"dataset:MBZUAI/Bactrian-X",
"arxiv:2404.04850",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-04-04T10:40:45Z |
2025-04-08T17:04:17+00:00
| 12 | 0 |
---
datasets:
- MBZUAI/Bactrian-X
language:
- multilingual
library_name: transformers
license: cc-by-nc-4.0
pipeline_tag: text-generation
tags:
- generation
- question answering
- instruction tuning
---
### Model Description
This HF repository hosts instruction fine-tuned multilingual BLOOM model using the parallel instruction dataset called Bactrain-X in 52 languages.
We progressively add a language during instruction fine-tuning at each time, and train 52 models in total. Then, we evaluate those models in three multilingual benchmarks.
Please refer to [our paper](https://arxiv.org/abs/2404.04850) for more details.
* Base model: [BLOOM 7B1](https://huggingface.co/bigscience/bloom-7b1)
* Instruction languages: English, Chinese, Afrikaans, Arabic, Azerbaijani, Bengali, Czech, German, Spanish, Estonian, Farsi, Finnish, French, Galician, Gujarati, Hebrew, Hindi, Croatian, Indonesian, Italian, Japanese, Georgian
* Instruction language codes: en, zh, af, ar, az, bn, cs, de, es, et, fa, fi, fr, gl, gu, he, hi, hr, id, it, ja, ka
* Training method: full-parameter fine-tuning.
### Usage
The model checkpoint should be loaded using `transformers` library.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-22")
model = AutoModelForCausalLM.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-22")
```
### Citation
```
@inproceedings{ji2025lucky52,
title={How Many Languages Make Good Multilingual Instruction Tuning? A Case Study on BLOOM},
author={Shaoxiong Ji and Pinzhen Chen},
year={2025},
booktitle={Proceedings of COLING},
url={https://arxiv.org/abs/2404.04850},
}
```
| null |
Non_BioNLP
|
### Model Description
This HF repository hosts instruction fine-tuned multilingual BLOOM model using the parallel instruction dataset called Bactrain-X in 52 languages.
We progressively add a language during instruction fine-tuning at each time, and train 52 models in total. Then, we evaluate those models in three multilingual benchmarks.
Please refer to [our paper](https://arxiv.org/abs/2404.04850) for more details.
* Base model: [BLOOM 7B1](https://huggingface.co/bigscience/bloom-7b1)
* Instruction languages: English, Chinese, Afrikaans, Arabic, Azerbaijani, Bengali, Czech, German, Spanish, Estonian, Farsi, Finnish, French, Galician, Gujarati, Hebrew, Hindi, Croatian, Indonesian, Italian, Japanese, Georgian
* Instruction language codes: en, zh, af, ar, az, bn, cs, de, es, et, fa, fi, fr, gl, gu, he, hi, hr, id, it, ja, ka
* Training method: full-parameter fine-tuning.
### Usage
The model checkpoint should be loaded using `transformers` library.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-22")
model = AutoModelForCausalLM.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-22")
```
### Citation
```
@inproceedings{ji2025lucky52,
title={How Many Languages Make Good Multilingual Instruction Tuning? A Case Study on BLOOM},
author={Shaoxiong Ji and Pinzhen Chen},
year={2025},
booktitle={Proceedings of COLING},
url={https://arxiv.org/abs/2404.04850},
}
```
|
{"datasets": ["MBZUAI/Bactrian-X"], "language": ["multilingual"], "library_name": "transformers", "license": "cc-by-nc-4.0", "pipeline_tag": "text-generation", "tags": ["generation", "question answering", "instruction tuning"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,468 |
google/t5-efficient-xl-nl16
|
google
|
text2text-generation
|
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | 2022-03-02T23:29:05Z |
2023-01-24T16:51:42+00:00
| 16 | 0 |
---
datasets:
- c4
language:
- en
license: apache-2.0
tags:
- deep-narrow
inference: false
---
# T5-Efficient-XL-NL16 (Deep-Narrow version)
T5-Efficient-XL-NL16 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-xl-nl16** - is of model type **Xl** with the following variations:
- **nl** is **16**
It has **1912.07** million parameters and thus requires *ca.* **7648.29 MB** of memory in full precision (*fp32*)
or **3824.14 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific, *el* or *dl* than both the number of encoder- and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow on of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future.
| null |
Non_BioNLP
|
# T5-Efficient-XL-NL16 (Deep-Narrow version)
T5-Efficient-XL-NL16 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-xl-nl16** - is of model type **Xl** with the following variations:
- **nl** is **16**
It has **1912.07** million parameters and thus requires *ca.* **7648.29 MB** of memory in full precision (*fp32*)
or **3824.14 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific, *el* or *dl* than both the number of encoder- and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow on of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future.
|
{"datasets": ["c4"], "language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "inference": false}
|
task
|
[
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 42,469 |
SeyedAli/Persian-Text-paraphraser-mT5-V1
|
SeyedAli
|
text2text-generation
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"fa",
"dataset:SeyedAli/Persian-Text-Paraphrasing",
"base_model:google/mt5-base",
"base_model:finetune:google/mt5-base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-09-11T18:46:35Z |
2023-11-18T20:01:27+00:00
| 193 | 1 |
---
base_model: google/mt5-base
datasets:
- SeyedAli/Persian-Text-Paraphrasing
language:
- fa
tags:
- generated_from_trainer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Persian-Text-paraphraser-mT5
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on [SeyedAli/Persian-Text-Paraphrasing](https://huggingface.co/datasets/SeyedAli/Persian-Text-Paraphrasing) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2334
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 160 | 0.4026 |
| No log | 2.0 | 320 | 0.2853 |
| No log | 3.0 | 480 | 0.2543 |
| 1.5981 | 4.0 | 640 | 0.2466 |
| 1.5981 | 5.0 | 800 | 0.2408 |
| 1.5981 | 6.0 | 960 | 0.2367 |
| 0.3535 | 7.0 | 1120 | 0.2363 |
| 0.3535 | 8.0 | 1280 | 0.2344 |
| 0.3535 | 9.0 | 1440 | 0.2334 |
| 0.3151 | 10.0 | 1600 | 0.2334 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Persian-Text-paraphraser-mT5
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on [SeyedAli/Persian-Text-Paraphrasing](https://huggingface.co/datasets/SeyedAli/Persian-Text-Paraphrasing) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2334
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 160 | 0.4026 |
| No log | 2.0 | 320 | 0.2853 |
| No log | 3.0 | 480 | 0.2543 |
| 1.5981 | 4.0 | 640 | 0.2466 |
| 1.5981 | 5.0 | 800 | 0.2408 |
| 1.5981 | 6.0 | 960 | 0.2367 |
| 0.3535 | 7.0 | 1120 | 0.2363 |
| 0.3535 | 8.0 | 1280 | 0.2344 |
| 0.3535 | 9.0 | 1440 | 0.2334 |
| 0.3151 | 10.0 | 1600 | 0.2334 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
{"base_model": "google/mt5-base", "datasets": ["SeyedAli/Persian-Text-Paraphrasing"], "language": ["fa"], "tags": ["generated_from_trainer"], "model-index": [{"name": "output", "results": []}]}
|
task
|
[
"PARAPHRASING"
] | 42,470 |
laituan245/molt5-large
|
laituan245
|
text2text-generation
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2204.11817",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-05-03T17:20:12Z |
2022-05-03T18:06:08+00:00
| 219 | 0 |
---
license: apache-2.0
---
## Example Usage
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("laituan245/molt5-large", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-large')
```
## Paper
For more information, please take a look at our paper.
Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817)
Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
| null |
Non_BioNLP
|
## Example Usage
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("laituan245/molt5-large", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-large')
```
## Paper
For more information, please take a look at our paper.
Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817)
Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
|
{"license": "apache-2.0"}
|
task
|
[
"TRANSLATION"
] | 42,471 |
pakawadeep/mt5-base-finetuned-ctfl-backtranslation_7k
|
pakawadeep
|
text2text-generation
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/mt5-base",
"base_model:finetune:google/mt5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-10-14T06:16:42Z |
2024-10-14T11:36:55+00:00
| 4 | 0 |
---
base_model: google/mt5-base
library_name: transformers
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: pakawadeep/mt5-base-finetuned-ctfl-backtranslation_7k
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pakawadeep/mt5-base-finetuned-ctfl-backtranslation_7k
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.0781
- Validation Loss: 4.5021
- Train Bleu: 0.0
- Train Gen Len: 21.0
- Epoch: 16
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Bleu | Train Gen Len | Epoch |
|:----------:|:---------------:|:----------:|:-------------:|:-----:|
| 16.3214 | 7.9429 | 0.0008 | 127.0 | 0 |
| 9.3082 | 7.3301 | 0.0 | 127.0 | 1 |
| 8.1593 | 7.1451 | 0.0 | 127.0 | 2 |
| 7.7440 | 7.0581 | 0.0 | 8.0 | 3 |
| 7.5624 | 7.0120 | 0.0 | 3.0 | 4 |
| 7.4569 | 6.9663 | 0.0 | 3.0 | 5 |
| 7.3873 | 6.9154 | 0.0001 | 114.0 | 6 |
| 7.3206 | 6.8373 | 0.0001 | 127.0 | 7 |
| 7.2175 | 6.7063 | 0.0001 | 127.0 | 8 |
| 7.0942 | 6.4250 | 0.0003 | 127.0 | 9 |
| 6.8748 | 6.0066 | 0.0003 | 127.0 | 10 |
| 6.5383 | 5.5921 | 0.0003 | 127.0 | 11 |
| 6.2094 | 5.2557 | 0.0003 | 127.0 | 12 |
| 5.8157 | 4.9725 | 0.0003 | 127.0 | 13 |
| 5.5098 | 4.7642 | 0.0003 | 127.0 | 14 |
| 5.2683 | 4.6171 | 0.0017 | 127.0 | 15 |
| 5.0781 | 4.5021 | 0.0 | 21.0 | 16 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.17.0
- Datasets 3.0.1
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pakawadeep/mt5-base-finetuned-ctfl-backtranslation_7k
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.0781
- Validation Loss: 4.5021
- Train Bleu: 0.0
- Train Gen Len: 21.0
- Epoch: 16
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Bleu | Train Gen Len | Epoch |
|:----------:|:---------------:|:----------:|:-------------:|:-----:|
| 16.3214 | 7.9429 | 0.0008 | 127.0 | 0 |
| 9.3082 | 7.3301 | 0.0 | 127.0 | 1 |
| 8.1593 | 7.1451 | 0.0 | 127.0 | 2 |
| 7.7440 | 7.0581 | 0.0 | 8.0 | 3 |
| 7.5624 | 7.0120 | 0.0 | 3.0 | 4 |
| 7.4569 | 6.9663 | 0.0 | 3.0 | 5 |
| 7.3873 | 6.9154 | 0.0001 | 114.0 | 6 |
| 7.3206 | 6.8373 | 0.0001 | 127.0 | 7 |
| 7.2175 | 6.7063 | 0.0001 | 127.0 | 8 |
| 7.0942 | 6.4250 | 0.0003 | 127.0 | 9 |
| 6.8748 | 6.0066 | 0.0003 | 127.0 | 10 |
| 6.5383 | 5.5921 | 0.0003 | 127.0 | 11 |
| 6.2094 | 5.2557 | 0.0003 | 127.0 | 12 |
| 5.8157 | 4.9725 | 0.0003 | 127.0 | 13 |
| 5.5098 | 4.7642 | 0.0003 | 127.0 | 14 |
| 5.2683 | 4.6171 | 0.0017 | 127.0 | 15 |
| 5.0781 | 4.5021 | 0.0 | 21.0 | 16 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.17.0
- Datasets 3.0.1
- Tokenizers 0.19.1
|
{"base_model": "google/mt5-base", "library_name": "transformers", "license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "pakawadeep/mt5-base-finetuned-ctfl-backtranslation_7k", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 42,472 |
cwchang/formatted_address
|
cwchang
|
text2text-generation
|
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"dataset:cwchang/tw_address_large",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-12-13T10:53:03Z |
2023-12-14T18:56:09+00:00
| 90 | 0 |
---
base_model: google/mt5-small
datasets:
- cwchang/tw_address_large
license: apache-2.0
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: formatted_address
results:
- task:
type: summarization
name: Summarization
dataset:
name: cwchang/tw_address_large
type: cwchang/tw_address_large
metrics:
- type: rouge
value: 99.396
name: Rouge1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# formatted_address
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the cwchang/tw_address_large dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0613
- Rouge1: 99.396
- Rouge2: 50.6957
- Rougel: 99.3961
- Rougelsum: 99.4032
- Gen Len: 13.744
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# formatted_address
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the cwchang/tw_address_large dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0613
- Rouge1: 99.396
- Rouge2: 50.6957
- Rougel: 99.3961
- Rougelsum: 99.4032
- Gen Len: 13.744
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
{"base_model": "google/mt5-small", "datasets": ["cwchang/tw_address_large"], "license": "apache-2.0", "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "formatted_address", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "cwchang/tw_address_large", "type": "cwchang/tw_address_large"}, "metrics": [{"type": "rouge", "value": 99.396, "name": "Rouge1"}]}]}]}
|
task
|
[
"SUMMARIZATION"
] | 42,473 |
RichardErkhov/martimfasantos_-_sft-sum-chosen-10lp-shuff-full-tiny-awq
|
RichardErkhov
| null |
[
"safetensors",
"llama",
"4-bit",
"awq",
"region:us"
] | 2024-12-15T09:45:44Z |
2024-12-15T09:46:14+00:00
| 4 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
sft-sum-chosen-10lp-shuff-full-tiny - AWQ
- Model creator: https://huggingface.co/martimfasantos/
- Original model: https://huggingface.co/martimfasantos/sft-sum-chosen-10lp-shuff-full-tiny/
Original model description:
---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
tags:
- summarization
- generated_from_trainer
model-index:
- name: sft-sum-chosen-10lp-shuff-full-tiny
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-sum-chosen-10lp-shuff-full-tiny
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the martimfasantos/openai-summarize-tldr dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9409
- Nll Loss: 1.9409
- Logps/best: -72.8478
- Rewards/chosen: 2.0114
- Rewards/rejected: -0.4229
- Rewards/accuracies: 0.9998
- Rewards/margins: 2.4343
- Logps/rejected: -11.6536
- Logps/chosen: -72.8478
- Logits/rejected: -2.6479
- Logits/chosen: -2.9522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Nll Loss | Logps/best | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------:|:----------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 2.3573 | 0.0137 | 100 | 2.3703 | 2.3703 | -88.8140 | 0.4147 | 0.0412 | 1.0 | 0.3735 | -7.0125 | -88.8140 | -2.6551 | -2.9658 |
| 2.1904 | 0.0274 | 200 | 2.1322 | 2.1322 | -79.9647 | 1.2997 | 0.0373 | 1.0 | 1.2624 | -7.0516 | -79.9647 | -2.6656 | -2.9758 |
| 1.9956 | 0.0411 | 300 | 2.0629 | 2.0629 | -77.3844 | 1.5577 | -0.1097 | 0.9995 | 1.6674 | -8.5217 | -77.3844 | -2.6813 | -2.9915 |
| 2.0379 | 0.0548 | 400 | 2.0405 | 2.0405 | -76.5483 | 1.6413 | -0.1759 | 0.9994 | 1.8173 | -9.1840 | -76.5483 | -2.6918 | -3.0033 |
| 1.9476 | 0.0685 | 500 | 2.0250 | 2.0250 | -75.9762 | 1.6985 | -0.1561 | 0.9991 | 1.8546 | -8.9858 | -75.9762 | -2.6981 | -3.0089 |
| 2.0151 | 0.0822 | 600 | 2.0134 | 2.0133 | -75.5465 | 1.7415 | -0.1979 | 0.9991 | 1.9394 | -9.4039 | -75.5465 | -2.6956 | -3.0066 |
| 1.9972 | 0.0960 | 700 | 2.0037 | 2.0037 | -75.1909 | 1.7770 | -0.2110 | 0.9997 | 1.9881 | -9.5345 | -75.1909 | -2.6886 | -2.9996 |
| 1.9851 | 0.1097 | 800 | 1.9950 | 1.9950 | -74.8615 | 1.8100 | -0.2127 | 0.9997 | 2.0226 | -9.5511 | -74.8615 | -2.6861 | -2.9971 |
| 2.0271 | 0.1234 | 900 | 1.9890 | 1.9890 | -74.6372 | 1.8324 | -0.2530 | 0.9995 | 2.0854 | -9.9543 | -74.6372 | -2.6818 | -2.9925 |
| 2.0501 | 0.1371 | 1000 | 1.9845 | 1.9845 | -74.4788 | 1.8483 | -0.3242 | 0.9997 | 2.1724 | -10.6661 | -74.4788 | -2.6491 | -2.9545 |
| 1.9699 | 0.1508 | 1100 | 1.9813 | 1.9812 | -74.3528 | 1.8609 | -0.3208 | 0.9997 | 2.1817 | -10.6327 | -74.3528 | -2.6664 | -2.9755 |
| 1.9448 | 0.1645 | 1200 | 1.9773 | 1.9772 | -74.2031 | 1.8758 | -0.2738 | 0.9997 | 2.1496 | -10.1623 | -74.2031 | -2.6739 | -2.9842 |
| 1.9606 | 0.1782 | 1300 | 1.9746 | 1.9746 | -74.0931 | 1.8868 | -0.3353 | 0.9997 | 2.2221 | -10.7775 | -74.0931 | -2.6755 | -2.9850 |
| 1.8795 | 0.1919 | 1400 | 1.9716 | 1.9715 | -73.9887 | 1.8973 | -0.3115 | 0.9997 | 2.2088 | -10.5398 | -73.9887 | -2.6658 | -2.9741 |
| 1.9585 | 0.2056 | 1500 | 1.9703 | 1.9703 | -73.9430 | 1.9018 | -0.3353 | 0.9997 | 2.2371 | -10.7774 | -73.9430 | -2.6721 | -2.9814 |
| 1.9508 | 0.2193 | 1600 | 1.9664 | 1.9664 | -73.7942 | 1.9167 | -0.4138 | 0.9998 | 2.3305 | -11.5624 | -73.7942 | -2.6751 | -2.9840 |
| 1.9041 | 0.2330 | 1700 | 1.9657 | 1.9656 | -73.7736 | 1.9188 | -0.3353 | 0.9997 | 2.2541 | -10.7776 | -73.7736 | -2.6703 | -2.9794 |
| 1.9507 | 0.2467 | 1800 | 1.9634 | 1.9634 | -73.6847 | 1.9277 | -0.3964 | 0.9998 | 2.3240 | -11.3880 | -73.6847 | -2.6728 | -2.9810 |
| 1.8942 | 0.2604 | 1900 | 1.9620 | 1.9620 | -73.6314 | 1.9330 | -0.3368 | 0.9998 | 2.2698 | -10.7926 | -73.6314 | -2.6631 | -2.9695 |
| 2.0088 | 0.2742 | 2000 | 1.9604 | 1.9603 | -73.5703 | 1.9391 | -0.3303 | 0.9997 | 2.2694 | -10.7277 | -73.5703 | -2.6651 | -2.9720 |
| 2.0277 | 0.2879 | 2100 | 1.9596 | 1.9596 | -73.5404 | 1.9421 | -0.3122 | 0.9997 | 2.2543 | -10.5463 | -73.5404 | -2.6687 | -2.9765 |
| 1.9697 | 0.3016 | 2200 | 1.9578 | 1.9578 | -73.4823 | 1.9479 | -0.3187 | 0.9998 | 2.2666 | -10.6117 | -73.4823 | -2.6615 | -2.9674 |
| 1.9756 | 0.3153 | 2300 | 1.9564 | 1.9564 | -73.4282 | 1.9533 | -0.3217 | 0.9997 | 2.2750 | -10.6410 | -73.4282 | -2.6624 | -2.9692 |
| 1.9471 | 0.3290 | 2400 | 1.9552 | 1.9551 | -73.3780 | 1.9583 | -0.3660 | 0.9997 | 2.3244 | -11.0849 | -73.3780 | -2.6636 | -2.9703 |
| 1.9646 | 0.3427 | 2500 | 1.9546 | 1.9546 | -73.3608 | 1.9601 | -0.3453 | 0.9997 | 2.3054 | -10.8779 | -73.3608 | -2.6522 | -2.9582 |
| 2.0034 | 0.3564 | 2600 | 1.9536 | 1.9536 | -73.3221 | 1.9639 | -0.4025 | 0.9998 | 2.3665 | -11.4498 | -73.3221 | -2.6635 | -2.9708 |
| 1.9853 | 0.3701 | 2700 | 1.9522 | 1.9522 | -73.2647 | 1.9697 | -0.3826 | 0.9998 | 2.3523 | -11.2507 | -73.2647 | -2.6548 | -2.9612 |
| 1.9648 | 0.3838 | 2800 | 1.9518 | 1.9518 | -73.2540 | 1.9707 | -0.4008 | 0.9998 | 2.3716 | -11.4329 | -73.2540 | -2.6557 | -2.9618 |
| 1.992 | 0.3975 | 2900 | 1.9514 | 1.9513 | -73.2347 | 1.9727 | -0.3741 | 0.9998 | 2.3468 | -11.1657 | -73.2347 | -2.6585 | -2.9649 |
| 1.9098 | 0.4112 | 3000 | 1.9501 | 1.9501 | -73.1879 | 1.9773 | -0.3653 | 0.9998 | 2.3426 | -11.0774 | -73.1879 | -2.6623 | -2.9691 |
| 2.0089 | 0.4249 | 3100 | 1.9496 | 1.9496 | -73.1694 | 1.9792 | -0.3960 | 0.9998 | 2.3752 | -11.3848 | -73.1694 | -2.6570 | -2.9627 |
| 2.0138 | 0.4386 | 3200 | 1.9487 | 1.9487 | -73.1364 | 1.9825 | -0.3799 | 0.9998 | 2.3624 | -11.2233 | -73.1364 | -2.6524 | -2.9576 |
| 1.9295 | 0.4524 | 3300 | 1.9489 | 1.9489 | -73.1488 | 1.9813 | -0.3977 | 0.9998 | 2.3790 | -11.4018 | -73.1488 | -2.6569 | -2.9628 |
| 1.9276 | 0.4661 | 3400 | 1.9479 | 1.9479 | -73.1079 | 1.9853 | -0.3945 | 0.9998 | 2.3799 | -11.3697 | -73.1079 | -2.6537 | -2.9590 |
| 1.9594 | 0.4798 | 3500 | 1.9472 | 1.9472 | -73.0821 | 1.9879 | -0.4255 | 0.9998 | 2.4135 | -11.6798 | -73.0821 | -2.6542 | -2.9600 |
| 1.9141 | 0.4935 | 3600 | 1.9471 | 1.9471 | -73.0800 | 1.9881 | -0.4024 | 0.9998 | 2.3906 | -11.4487 | -73.0800 | -2.6500 | -2.9555 |
| 1.8611 | 0.5072 | 3700 | 1.9460 | 1.9460 | -73.0338 | 1.9928 | -0.3865 | 0.9998 | 2.3793 | -11.2897 | -73.0338 | -2.6542 | -2.9599 |
| 1.8907 | 0.5209 | 3800 | 1.9460 | 1.9460 | -73.0372 | 1.9924 | -0.3918 | 0.9998 | 2.3843 | -11.3429 | -73.0372 | -2.6504 | -2.9556 |
| 1.9147 | 0.5346 | 3900 | 1.9456 | 1.9456 | -73.0218 | 1.9940 | -0.3939 | 0.9998 | 2.3879 | -11.3637 | -73.0218 | -2.6498 | -2.9550 |
| 1.9485 | 0.5483 | 4000 | 1.9454 | 1.9454 | -73.0146 | 1.9947 | -0.4036 | 0.9998 | 2.3983 | -11.4605 | -73.0146 | -2.6513 | -2.9565 |
| 1.9379 | 0.5620 | 4100 | 1.9448 | 1.9448 | -72.9908 | 1.9971 | -0.3932 | 0.9998 | 2.3902 | -11.3561 | -72.9908 | -2.6501 | -2.9550 |
| 1.8956 | 0.5757 | 4200 | 1.9444 | 1.9443 | -72.9738 | 1.9988 | -0.4097 | 0.9998 | 2.4084 | -11.5214 | -72.9738 | -2.6477 | -2.9518 |
| 1.9916 | 0.5894 | 4300 | 1.9440 | 1.9440 | -72.9580 | 2.0003 | -0.4049 | 0.9998 | 2.4053 | -11.4737 | -72.9580 | -2.6473 | -2.9514 |
| 1.8885 | 0.6031 | 4400 | 1.9441 | 1.9441 | -72.9673 | 1.9994 | -0.3808 | 0.9998 | 2.3802 | -11.2320 | -72.9673 | -2.6464 | -2.9503 |
| 1.9078 | 0.6169 | 4500 | 1.9437 | 1.9436 | -72.9481 | 2.0013 | -0.4206 | 0.9998 | 2.4220 | -11.6308 | -72.9481 | -2.6465 | -2.9503 |
| 1.9037 | 0.6306 | 4600 | 1.9435 | 1.9434 | -72.9426 | 2.0019 | -0.3718 | 0.9998 | 2.3737 | -11.1427 | -72.9426 | -2.6441 | -2.9481 |
| 1.9558 | 0.6443 | 4700 | 1.9427 | 1.9427 | -72.9121 | 2.0049 | -0.3758 | 0.9998 | 2.3807 | -11.1827 | -72.9121 | -2.6445 | -2.9484 |
| 1.9416 | 0.6580 | 4800 | 1.9429 | 1.9428 | -72.9187 | 2.0043 | -0.3698 | 0.9998 | 2.3741 | -11.1227 | -72.9187 | -2.6447 | -2.9486 |
| 1.9471 | 0.6717 | 4900 | 1.9427 | 1.9427 | -72.9167 | 2.0045 | -0.4041 | 0.9998 | 2.4085 | -11.4650 | -72.9167 | -2.6447 | -2.9486 |
| 1.9237 | 0.6854 | 5000 | 1.9425 | 1.9425 | -72.9062 | 2.0055 | -0.4023 | 0.9998 | 2.4079 | -11.4479 | -72.9062 | -2.6451 | -2.9490 |
| 1.9687 | 0.6991 | 5100 | 1.9422 | 1.9421 | -72.8930 | 2.0068 | -0.4106 | 0.9998 | 2.4174 | -11.5306 | -72.8930 | -2.6475 | -2.9516 |
| 1.9274 | 0.7128 | 5200 | 1.9420 | 1.9420 | -72.8846 | 2.0077 | -0.3934 | 0.9998 | 2.4011 | -11.3589 | -72.8846 | -2.6454 | -2.9492 |
| 1.8258 | 0.7265 | 5300 | 1.9418 | 1.9418 | -72.8788 | 2.0083 | -0.3905 | 0.9998 | 2.3987 | -11.3293 | -72.8788 | -2.6458 | -2.9498 |
| 1.8978 | 0.7402 | 5400 | 1.9416 | 1.9416 | -72.8710 | 2.0090 | -0.4199 | 0.9998 | 2.4289 | -11.6232 | -72.8710 | -2.6475 | -2.9515 |
| 1.9706 | 0.7539 | 5500 | 1.9416 | 1.9416 | -72.8733 | 2.0088 | -0.4296 | 0.9998 | 2.4384 | -11.7202 | -72.8733 | -2.6467 | -2.9506 |
| 1.8711 | 0.7676 | 5600 | 1.9416 | 1.9415 | -72.8708 | 2.0091 | -0.4093 | 0.9998 | 2.4183 | -11.5174 | -72.8708 | -2.6454 | -2.9492 |
| 1.925 | 0.7813 | 5700 | 1.9412 | 1.9411 | -72.8550 | 2.0106 | -0.4237 | 0.9998 | 2.4344 | -11.6619 | -72.8550 | -2.6463 | -2.9502 |
| 1.952 | 0.7951 | 5800 | 1.9412 | 1.9411 | -72.8554 | 2.0106 | -0.4179 | 0.9998 | 2.4285 | -11.6032 | -72.8554 | -2.6463 | -2.9503 |
| 1.9295 | 0.8088 | 5900 | 1.9413 | 1.9413 | -72.8621 | 2.0099 | -0.4133 | 0.9998 | 2.4233 | -11.5578 | -72.8621 | -2.6463 | -2.9503 |
| 1.9457 | 0.8225 | 6000 | 1.9413 | 1.9413 | -72.8636 | 2.0098 | -0.4083 | 0.9998 | 2.4180 | -11.5072 | -72.8636 | -2.6459 | -2.9499 |
| 1.9016 | 0.8362 | 6100 | 1.9412 | 1.9412 | -72.8592 | 2.0102 | -0.4150 | 0.9998 | 2.4252 | -11.5748 | -72.8592 | -2.6471 | -2.9513 |
| 1.9789 | 0.8499 | 6200 | 1.9413 | 1.9413 | -72.8632 | 2.0098 | -0.4221 | 0.9998 | 2.4319 | -11.6458 | -72.8632 | -2.6477 | -2.9520 |
| 1.944 | 0.8636 | 6300 | 1.9411 | 1.9411 | -72.8542 | 2.0107 | -0.4232 | 0.9998 | 2.4339 | -11.6568 | -72.8542 | -2.6475 | -2.9518 |
| 1.9435 | 0.8773 | 6400 | 1.9410 | 1.9409 | -72.8496 | 2.0112 | -0.4278 | 0.9998 | 2.4390 | -11.7027 | -72.8496 | -2.6479 | -2.9523 |
| 1.917 | 0.8910 | 6500 | 1.9410 | 1.9410 | -72.8519 | 2.0109 | -0.4237 | 0.9998 | 2.4346 | -11.6610 | -72.8519 | -2.6482 | -2.9525 |
| 1.9243 | 0.9047 | 6600 | 1.9410 | 1.9410 | -72.8520 | 2.0109 | -0.4202 | 0.9998 | 2.4311 | -11.6265 | -72.8520 | -2.6480 | -2.9523 |
| 1.8624 | 0.9184 | 6700 | 1.9409 | 1.9409 | -72.8485 | 2.0113 | -0.4202 | 0.9998 | 2.4314 | -11.6260 | -72.8485 | -2.6477 | -2.9520 |
| 1.8998 | 0.9321 | 6800 | 1.9410 | 1.9409 | -72.8489 | 2.0112 | -0.4227 | 0.9998 | 2.4340 | -11.6518 | -72.8489 | -2.6478 | -2.9521 |
| 1.9654 | 0.9458 | 6900 | 1.9410 | 1.9409 | -72.8490 | 2.0112 | -0.4228 | 0.9998 | 2.4341 | -11.6529 | -72.8490 | -2.6478 | -2.9521 |
| 1.9113 | 0.9595 | 7000 | 1.9409 | 1.9409 | -72.8471 | 2.0114 | -0.4228 | 0.9998 | 2.4342 | -11.6520 | -72.8471 | -2.6477 | -2.9520 |
| 1.951 | 0.9733 | 7100 | 1.9410 | 1.9410 | -72.8501 | 2.0111 | -0.4228 | 0.9998 | 2.4339 | -11.6524 | -72.8501 | -2.6478 | -2.9521 |
| 1.9863 | 0.9870 | 7200 | 1.9409 | 1.9409 | -72.8478 | 2.0114 | -0.4229 | 0.9998 | 2.4343 | -11.6536 | -72.8478 | -2.6479 | -2.9522 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
sft-sum-chosen-10lp-shuff-full-tiny - AWQ
- Model creator: https://huggingface.co/martimfasantos/
- Original model: https://huggingface.co/martimfasantos/sft-sum-chosen-10lp-shuff-full-tiny/
Original model description:
---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
tags:
- summarization
- generated_from_trainer
model-index:
- name: sft-sum-chosen-10lp-shuff-full-tiny
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-sum-chosen-10lp-shuff-full-tiny
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the martimfasantos/openai-summarize-tldr dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9409
- Nll Loss: 1.9409
- Logps/best: -72.8478
- Rewards/chosen: 2.0114
- Rewards/rejected: -0.4229
- Rewards/accuracies: 0.9998
- Rewards/margins: 2.4343
- Logps/rejected: -11.6536
- Logps/chosen: -72.8478
- Logits/rejected: -2.6479
- Logits/chosen: -2.9522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Nll Loss | Logps/best | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------:|:----------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 2.3573 | 0.0137 | 100 | 2.3703 | 2.3703 | -88.8140 | 0.4147 | 0.0412 | 1.0 | 0.3735 | -7.0125 | -88.8140 | -2.6551 | -2.9658 |
| 2.1904 | 0.0274 | 200 | 2.1322 | 2.1322 | -79.9647 | 1.2997 | 0.0373 | 1.0 | 1.2624 | -7.0516 | -79.9647 | -2.6656 | -2.9758 |
| 1.9956 | 0.0411 | 300 | 2.0629 | 2.0629 | -77.3844 | 1.5577 | -0.1097 | 0.9995 | 1.6674 | -8.5217 | -77.3844 | -2.6813 | -2.9915 |
| 2.0379 | 0.0548 | 400 | 2.0405 | 2.0405 | -76.5483 | 1.6413 | -0.1759 | 0.9994 | 1.8173 | -9.1840 | -76.5483 | -2.6918 | -3.0033 |
| 1.9476 | 0.0685 | 500 | 2.0250 | 2.0250 | -75.9762 | 1.6985 | -0.1561 | 0.9991 | 1.8546 | -8.9858 | -75.9762 | -2.6981 | -3.0089 |
| 2.0151 | 0.0822 | 600 | 2.0134 | 2.0133 | -75.5465 | 1.7415 | -0.1979 | 0.9991 | 1.9394 | -9.4039 | -75.5465 | -2.6956 | -3.0066 |
| 1.9972 | 0.0960 | 700 | 2.0037 | 2.0037 | -75.1909 | 1.7770 | -0.2110 | 0.9997 | 1.9881 | -9.5345 | -75.1909 | -2.6886 | -2.9996 |
| 1.9851 | 0.1097 | 800 | 1.9950 | 1.9950 | -74.8615 | 1.8100 | -0.2127 | 0.9997 | 2.0226 | -9.5511 | -74.8615 | -2.6861 | -2.9971 |
| 2.0271 | 0.1234 | 900 | 1.9890 | 1.9890 | -74.6372 | 1.8324 | -0.2530 | 0.9995 | 2.0854 | -9.9543 | -74.6372 | -2.6818 | -2.9925 |
| 2.0501 | 0.1371 | 1000 | 1.9845 | 1.9845 | -74.4788 | 1.8483 | -0.3242 | 0.9997 | 2.1724 | -10.6661 | -74.4788 | -2.6491 | -2.9545 |
| 1.9699 | 0.1508 | 1100 | 1.9813 | 1.9812 | -74.3528 | 1.8609 | -0.3208 | 0.9997 | 2.1817 | -10.6327 | -74.3528 | -2.6664 | -2.9755 |
| 1.9448 | 0.1645 | 1200 | 1.9773 | 1.9772 | -74.2031 | 1.8758 | -0.2738 | 0.9997 | 2.1496 | -10.1623 | -74.2031 | -2.6739 | -2.9842 |
| 1.9606 | 0.1782 | 1300 | 1.9746 | 1.9746 | -74.0931 | 1.8868 | -0.3353 | 0.9997 | 2.2221 | -10.7775 | -74.0931 | -2.6755 | -2.9850 |
| 1.8795 | 0.1919 | 1400 | 1.9716 | 1.9715 | -73.9887 | 1.8973 | -0.3115 | 0.9997 | 2.2088 | -10.5398 | -73.9887 | -2.6658 | -2.9741 |
| 1.9585 | 0.2056 | 1500 | 1.9703 | 1.9703 | -73.9430 | 1.9018 | -0.3353 | 0.9997 | 2.2371 | -10.7774 | -73.9430 | -2.6721 | -2.9814 |
| 1.9508 | 0.2193 | 1600 | 1.9664 | 1.9664 | -73.7942 | 1.9167 | -0.4138 | 0.9998 | 2.3305 | -11.5624 | -73.7942 | -2.6751 | -2.9840 |
| 1.9041 | 0.2330 | 1700 | 1.9657 | 1.9656 | -73.7736 | 1.9188 | -0.3353 | 0.9997 | 2.2541 | -10.7776 | -73.7736 | -2.6703 | -2.9794 |
| 1.9507 | 0.2467 | 1800 | 1.9634 | 1.9634 | -73.6847 | 1.9277 | -0.3964 | 0.9998 | 2.3240 | -11.3880 | -73.6847 | -2.6728 | -2.9810 |
| 1.8942 | 0.2604 | 1900 | 1.9620 | 1.9620 | -73.6314 | 1.9330 | -0.3368 | 0.9998 | 2.2698 | -10.7926 | -73.6314 | -2.6631 | -2.9695 |
| 2.0088 | 0.2742 | 2000 | 1.9604 | 1.9603 | -73.5703 | 1.9391 | -0.3303 | 0.9997 | 2.2694 | -10.7277 | -73.5703 | -2.6651 | -2.9720 |
| 2.0277 | 0.2879 | 2100 | 1.9596 | 1.9596 | -73.5404 | 1.9421 | -0.3122 | 0.9997 | 2.2543 | -10.5463 | -73.5404 | -2.6687 | -2.9765 |
| 1.9697 | 0.3016 | 2200 | 1.9578 | 1.9578 | -73.4823 | 1.9479 | -0.3187 | 0.9998 | 2.2666 | -10.6117 | -73.4823 | -2.6615 | -2.9674 |
| 1.9756 | 0.3153 | 2300 | 1.9564 | 1.9564 | -73.4282 | 1.9533 | -0.3217 | 0.9997 | 2.2750 | -10.6410 | -73.4282 | -2.6624 | -2.9692 |
| 1.9471 | 0.3290 | 2400 | 1.9552 | 1.9551 | -73.3780 | 1.9583 | -0.3660 | 0.9997 | 2.3244 | -11.0849 | -73.3780 | -2.6636 | -2.9703 |
| 1.9646 | 0.3427 | 2500 | 1.9546 | 1.9546 | -73.3608 | 1.9601 | -0.3453 | 0.9997 | 2.3054 | -10.8779 | -73.3608 | -2.6522 | -2.9582 |
| 2.0034 | 0.3564 | 2600 | 1.9536 | 1.9536 | -73.3221 | 1.9639 | -0.4025 | 0.9998 | 2.3665 | -11.4498 | -73.3221 | -2.6635 | -2.9708 |
| 1.9853 | 0.3701 | 2700 | 1.9522 | 1.9522 | -73.2647 | 1.9697 | -0.3826 | 0.9998 | 2.3523 | -11.2507 | -73.2647 | -2.6548 | -2.9612 |
| 1.9648 | 0.3838 | 2800 | 1.9518 | 1.9518 | -73.2540 | 1.9707 | -0.4008 | 0.9998 | 2.3716 | -11.4329 | -73.2540 | -2.6557 | -2.9618 |
| 1.992 | 0.3975 | 2900 | 1.9514 | 1.9513 | -73.2347 | 1.9727 | -0.3741 | 0.9998 | 2.3468 | -11.1657 | -73.2347 | -2.6585 | -2.9649 |
| 1.9098 | 0.4112 | 3000 | 1.9501 | 1.9501 | -73.1879 | 1.9773 | -0.3653 | 0.9998 | 2.3426 | -11.0774 | -73.1879 | -2.6623 | -2.9691 |
| 2.0089 | 0.4249 | 3100 | 1.9496 | 1.9496 | -73.1694 | 1.9792 | -0.3960 | 0.9998 | 2.3752 | -11.3848 | -73.1694 | -2.6570 | -2.9627 |
| 2.0138 | 0.4386 | 3200 | 1.9487 | 1.9487 | -73.1364 | 1.9825 | -0.3799 | 0.9998 | 2.3624 | -11.2233 | -73.1364 | -2.6524 | -2.9576 |
| 1.9295 | 0.4524 | 3300 | 1.9489 | 1.9489 | -73.1488 | 1.9813 | -0.3977 | 0.9998 | 2.3790 | -11.4018 | -73.1488 | -2.6569 | -2.9628 |
| 1.9276 | 0.4661 | 3400 | 1.9479 | 1.9479 | -73.1079 | 1.9853 | -0.3945 | 0.9998 | 2.3799 | -11.3697 | -73.1079 | -2.6537 | -2.9590 |
| 1.9594 | 0.4798 | 3500 | 1.9472 | 1.9472 | -73.0821 | 1.9879 | -0.4255 | 0.9998 | 2.4135 | -11.6798 | -73.0821 | -2.6542 | -2.9600 |
| 1.9141 | 0.4935 | 3600 | 1.9471 | 1.9471 | -73.0800 | 1.9881 | -0.4024 | 0.9998 | 2.3906 | -11.4487 | -73.0800 | -2.6500 | -2.9555 |
| 1.8611 | 0.5072 | 3700 | 1.9460 | 1.9460 | -73.0338 | 1.9928 | -0.3865 | 0.9998 | 2.3793 | -11.2897 | -73.0338 | -2.6542 | -2.9599 |
| 1.8907 | 0.5209 | 3800 | 1.9460 | 1.9460 | -73.0372 | 1.9924 | -0.3918 | 0.9998 | 2.3843 | -11.3429 | -73.0372 | -2.6504 | -2.9556 |
| 1.9147 | 0.5346 | 3900 | 1.9456 | 1.9456 | -73.0218 | 1.9940 | -0.3939 | 0.9998 | 2.3879 | -11.3637 | -73.0218 | -2.6498 | -2.9550 |
| 1.9485 | 0.5483 | 4000 | 1.9454 | 1.9454 | -73.0146 | 1.9947 | -0.4036 | 0.9998 | 2.3983 | -11.4605 | -73.0146 | -2.6513 | -2.9565 |
| 1.9379 | 0.5620 | 4100 | 1.9448 | 1.9448 | -72.9908 | 1.9971 | -0.3932 | 0.9998 | 2.3902 | -11.3561 | -72.9908 | -2.6501 | -2.9550 |
| 1.8956 | 0.5757 | 4200 | 1.9444 | 1.9443 | -72.9738 | 1.9988 | -0.4097 | 0.9998 | 2.4084 | -11.5214 | -72.9738 | -2.6477 | -2.9518 |
| 1.9916 | 0.5894 | 4300 | 1.9440 | 1.9440 | -72.9580 | 2.0003 | -0.4049 | 0.9998 | 2.4053 | -11.4737 | -72.9580 | -2.6473 | -2.9514 |
| 1.8885 | 0.6031 | 4400 | 1.9441 | 1.9441 | -72.9673 | 1.9994 | -0.3808 | 0.9998 | 2.3802 | -11.2320 | -72.9673 | -2.6464 | -2.9503 |
| 1.9078 | 0.6169 | 4500 | 1.9437 | 1.9436 | -72.9481 | 2.0013 | -0.4206 | 0.9998 | 2.4220 | -11.6308 | -72.9481 | -2.6465 | -2.9503 |
| 1.9037 | 0.6306 | 4600 | 1.9435 | 1.9434 | -72.9426 | 2.0019 | -0.3718 | 0.9998 | 2.3737 | -11.1427 | -72.9426 | -2.6441 | -2.9481 |
| 1.9558 | 0.6443 | 4700 | 1.9427 | 1.9427 | -72.9121 | 2.0049 | -0.3758 | 0.9998 | 2.3807 | -11.1827 | -72.9121 | -2.6445 | -2.9484 |
| 1.9416 | 0.6580 | 4800 | 1.9429 | 1.9428 | -72.9187 | 2.0043 | -0.3698 | 0.9998 | 2.3741 | -11.1227 | -72.9187 | -2.6447 | -2.9486 |
| 1.9471 | 0.6717 | 4900 | 1.9427 | 1.9427 | -72.9167 | 2.0045 | -0.4041 | 0.9998 | 2.4085 | -11.4650 | -72.9167 | -2.6447 | -2.9486 |
| 1.9237 | 0.6854 | 5000 | 1.9425 | 1.9425 | -72.9062 | 2.0055 | -0.4023 | 0.9998 | 2.4079 | -11.4479 | -72.9062 | -2.6451 | -2.9490 |
| 1.9687 | 0.6991 | 5100 | 1.9422 | 1.9421 | -72.8930 | 2.0068 | -0.4106 | 0.9998 | 2.4174 | -11.5306 | -72.8930 | -2.6475 | -2.9516 |
| 1.9274 | 0.7128 | 5200 | 1.9420 | 1.9420 | -72.8846 | 2.0077 | -0.3934 | 0.9998 | 2.4011 | -11.3589 | -72.8846 | -2.6454 | -2.9492 |
| 1.8258 | 0.7265 | 5300 | 1.9418 | 1.9418 | -72.8788 | 2.0083 | -0.3905 | 0.9998 | 2.3987 | -11.3293 | -72.8788 | -2.6458 | -2.9498 |
| 1.8978 | 0.7402 | 5400 | 1.9416 | 1.9416 | -72.8710 | 2.0090 | -0.4199 | 0.9998 | 2.4289 | -11.6232 | -72.8710 | -2.6475 | -2.9515 |
| 1.9706 | 0.7539 | 5500 | 1.9416 | 1.9416 | -72.8733 | 2.0088 | -0.4296 | 0.9998 | 2.4384 | -11.7202 | -72.8733 | -2.6467 | -2.9506 |
| 1.8711 | 0.7676 | 5600 | 1.9416 | 1.9415 | -72.8708 | 2.0091 | -0.4093 | 0.9998 | 2.4183 | -11.5174 | -72.8708 | -2.6454 | -2.9492 |
| 1.925 | 0.7813 | 5700 | 1.9412 | 1.9411 | -72.8550 | 2.0106 | -0.4237 | 0.9998 | 2.4344 | -11.6619 | -72.8550 | -2.6463 | -2.9502 |
| 1.952 | 0.7951 | 5800 | 1.9412 | 1.9411 | -72.8554 | 2.0106 | -0.4179 | 0.9998 | 2.4285 | -11.6032 | -72.8554 | -2.6463 | -2.9503 |
| 1.9295 | 0.8088 | 5900 | 1.9413 | 1.9413 | -72.8621 | 2.0099 | -0.4133 | 0.9998 | 2.4233 | -11.5578 | -72.8621 | -2.6463 | -2.9503 |
| 1.9457 | 0.8225 | 6000 | 1.9413 | 1.9413 | -72.8636 | 2.0098 | -0.4083 | 0.9998 | 2.4180 | -11.5072 | -72.8636 | -2.6459 | -2.9499 |
| 1.9016 | 0.8362 | 6100 | 1.9412 | 1.9412 | -72.8592 | 2.0102 | -0.4150 | 0.9998 | 2.4252 | -11.5748 | -72.8592 | -2.6471 | -2.9513 |
| 1.9789 | 0.8499 | 6200 | 1.9413 | 1.9413 | -72.8632 | 2.0098 | -0.4221 | 0.9998 | 2.4319 | -11.6458 | -72.8632 | -2.6477 | -2.9520 |
| 1.944 | 0.8636 | 6300 | 1.9411 | 1.9411 | -72.8542 | 2.0107 | -0.4232 | 0.9998 | 2.4339 | -11.6568 | -72.8542 | -2.6475 | -2.9518 |
| 1.9435 | 0.8773 | 6400 | 1.9410 | 1.9409 | -72.8496 | 2.0112 | -0.4278 | 0.9998 | 2.4390 | -11.7027 | -72.8496 | -2.6479 | -2.9523 |
| 1.917 | 0.8910 | 6500 | 1.9410 | 1.9410 | -72.8519 | 2.0109 | -0.4237 | 0.9998 | 2.4346 | -11.6610 | -72.8519 | -2.6482 | -2.9525 |
| 1.9243 | 0.9047 | 6600 | 1.9410 | 1.9410 | -72.8520 | 2.0109 | -0.4202 | 0.9998 | 2.4311 | -11.6265 | -72.8520 | -2.6480 | -2.9523 |
| 1.8624 | 0.9184 | 6700 | 1.9409 | 1.9409 | -72.8485 | 2.0113 | -0.4202 | 0.9998 | 2.4314 | -11.6260 | -72.8485 | -2.6477 | -2.9520 |
| 1.8998 | 0.9321 | 6800 | 1.9410 | 1.9409 | -72.8489 | 2.0112 | -0.4227 | 0.9998 | 2.4340 | -11.6518 | -72.8489 | -2.6478 | -2.9521 |
| 1.9654 | 0.9458 | 6900 | 1.9410 | 1.9409 | -72.8490 | 2.0112 | -0.4228 | 0.9998 | 2.4341 | -11.6529 | -72.8490 | -2.6478 | -2.9521 |
| 1.9113 | 0.9595 | 7000 | 1.9409 | 1.9409 | -72.8471 | 2.0114 | -0.4228 | 0.9998 | 2.4342 | -11.6520 | -72.8471 | -2.6477 | -2.9520 |
| 1.951 | 0.9733 | 7100 | 1.9410 | 1.9410 | -72.8501 | 2.0111 | -0.4228 | 0.9998 | 2.4339 | -11.6524 | -72.8501 | -2.6478 | -2.9521 |
| 1.9863 | 0.9870 | 7200 | 1.9409 | 1.9409 | -72.8478 | 2.0114 | -0.4229 | 0.9998 | 2.4343 | -11.6536 | -72.8478 | -2.6479 | -2.9522 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
|
{}
|
task
|
[
"SUMMARIZATION"
] | 42,474 |
profelyndoncarlson/text_classification_beto_tf_9_24_2024
|
profelyndoncarlson
| null |
[
"tf",
"bert",
"es",
"base_model:dccuchile/bert-base-spanish-wwm-cased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-cased",
"license:apache-2.0",
"region:us"
] | 2024-09-24T19:13:24Z |
2025-01-21T15:54:15+00:00
| 16 | 0 |
---
base_model:
- dccuchile/bert-base-spanish-wwm-cased
language:
- es
license: apache-2.0
---
# **Model Card: Spanish Binary Text Classifier using BETO**
This model is a **binary text classifier** fine-tuned from [**BETO**](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) (a BERT-based model pre-trained on Spanish). It is designed to **predict whether a given text prompt requires a search query**, enabling applications such as intelligent search systems, content recommendation, and automated query handling in Spanish-language environments.
---
## **Model Details**
- **Model Name**: `text_classification_beto_tf_9_24_2024`
- **Architecture**: [BETO (BERT-base Spanish WWM)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased)
- **Language**: Spanish
- **Task**: Binary Text Classification
- **Objective**: Given a **prompt**, the model predicts a binary label indicating whether the prompt **requires a search query** (`1 = requires search`, `0 = no search`).
---
## **Intended Use and Applications**
1. **Intelligent Search Systems**: Enhance search engines by determining when a user prompt necessitates a search query, improving search relevance and user experience.
2. **Content Recommendation**: Automatically categorize content requests to provide appropriate recommendations or resources.
3. **Automated Query Handling**: Streamline customer support or chatbot systems by identifying when additional information retrieval is needed.
4. **Information Filtering**: Sort or prioritize user inputs based on the necessity of executing a search, optimizing backend processing.
5. **Educational Tools**: Assist in language learning applications by categorizing prompts for tailored responses or resources.
---
## **How It Was Trained**
### **1. Data Source**
- **Dataset**: The model was trained on a dataset sourced from an internal SQL Server database containing:
- **Prompts** (`input_text`): Text inputs requiring classification.
- **Requires Search** (`requires_search`): Binary labels indicating whether the prompt necessitates a search (`1`) or not (`0`).
- **Data Selection**: The top **5,000** `(prompt, requires_search)` pairs were selected where both `prompt` and `requires_search` fields are non-empty, ensuring high-quality and relevant training data.
### **2. Preprocessing**
- **Data Loading**: Utilized `pandas` to load data from a CSV file containing the necessary columns.
- **Data Splitting**:
- **Training Set**: 80% of the data.
- **Validation Set**: 20% of the remaining data.
- **Test Set**: 1% of the training data (resulting in approximately 10% of the total data for testing).
- **Tokenization**:
- Employed **BETO Tokenizer** (`BertTokenizer`) suitable for Spanish text.
- Configured with:
- `truncation=True`: Truncate sequences longer than the maximum length.
- `padding=True`: Pad shorter sequences to the maximum length.
- `max_length=512`: Set maximum token length to 512 tokens.
### **3. Training Setup**
- **Base Model**: `dccuchile/bert-base-spanish-wwm-cased`
- **Framework**: TensorFlow with Keras API.
- **Model Architecture**: `TFBertForSequenceClassification` adapted for binary classification (`num_labels=2`).
- **Loss Function**: `SparseCategoricalCrossentropy` with `from_logits=True` to handle integer labels directly.
- **Optimizer**: Adam optimizer with a learning rate of `5e-5` and weight decay of `0.01`.
- **Metrics**: `SparseCategoricalAccuracy` to monitor classification accuracy during training.
- **Training Parameters**:
- **Epochs**: 4
- **Batch Size**: 16
- **Early Stopping**: Implemented via Keras callbacks to prevent overfitting by monitoring validation loss.
### **4. Data Splits**
- **Training Set**: 80%
- **Validation Set**: 19%
- **Test Set**: 1%
This split ensures that the model has ample data for learning while retaining sufficient data for unbiased evaluation.
---
## **Model Performance**
- **Training Metrics**:
- **Loss**: Monitored using `SparseCategoricalCrossentropy` on both training and validation sets.
- **Accuracy**: Tracked to evaluate the proportion of correct predictions.
- **Final Evaluation**:
- **Test Set Performance**: The model's performance on the test set is logged as `Test Loss` and `Test Sparse Categorical Accuracy`.
- **Performance Notes**: Specific numerical results (e.g., exact loss and accuracy values) depend on the data distribution and training conditions. Users are encouraged to evaluate the model on their own datasets to assess performance in their specific contexts.
---
## **Usage Example**
Below is a Python example demonstrating how to use the fine-tuned BETO model for binary text classification in Spanish. Ensure you have installed the necessary libraries (`transformers`, `tensorflow`, `pandas`, etc.) and have the model saved in the specified `output_dir`.
```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification
# Load the trained model and tokenizer
model_dir = "./text_classification_beto_tf_9_24_2024"
tokenizer = BertTokenizer.from_pretrained(model_dir)
model = TFBertForSequenceClassification.from_pretrained(model_dir)
# Prepare the input
prompt = "¿Cómo puedo mejorar la eficiencia energética en mi hogar?"
# Tokenize the input
inputs = tokenizer(
prompt,
return_tensors="tf",
max_length=512,
truncation=True,
padding=True
)
# Perform prediction
outputs = model(inputs)
logits = outputs.logits
predicted_class = tf.argmax(logits, axis=1).numpy()[0]
# Interpret the result
if predicted_class == 1:
print("Requiere búsqueda: Sí")
else:
print("Requiere búsqueda: No")
```
**Output:**
```
Requiere búsqueda: Sí
```
This script loads the fine-tuned model and tokenizer, tokenizes a sample prompt, performs a prediction, and interprets the result by indicating whether the prompt requires a search query.
---
## **Limitations and Ethical Considerations**
1. **Bias and Fairness**:
- The model's predictions are influenced by the training data. If the dataset contains biases (e.g., overrepresentation of certain topics), the model may inadvertently reflect those biases. Users should ensure the training data is balanced and representative of diverse prompts.
2. **Data Privacy**:
- Ensure that the data used for training does not contain sensitive or personal information unless appropriate consent has been obtained. Compliance with data protection regulations (e.g., GDPR) is essential.
3. **Domain Specificity**:
- The model was trained on specific prompts and may perform optimally within similar contexts. Its performance may degrade when applied to highly specialized or unfamiliar domains.
4. **Misclassification Risks**:
- Incorrect predictions (false positives or false negatives) can impact user experience. Implement additional checks or human-in-the-loop systems for critical applications.
5. **Responsible Usage**:
- Prevent misuse by ensuring the model is employed ethically, avoiding applications that could harm individuals or groups. Regularly monitor and evaluate the model's outputs to maintain ethical standards.
---
## **Intended Users**
- **Developers** building intelligent search and recommendation systems in Spanish.
- **Content Managers** seeking to automate the categorization of user prompts for content delivery.
- **Researchers** exploring text classification and natural language processing tasks in Spanish.
- **Businesses** integrating automated query handling or customer support systems.
- **Educational Institutions** developing tools for language learning and information retrieval.
---
| null |
Non_BioNLP
|
# **Model Card: Spanish Binary Text Classifier using BETO**
This model is a **binary text classifier** fine-tuned from [**BETO**](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) (a BERT-based model pre-trained on Spanish). It is designed to **predict whether a given text prompt requires a search query**, enabling applications such as intelligent search systems, content recommendation, and automated query handling in Spanish-language environments.
---
## **Model Details**
- **Model Name**: `text_classification_beto_tf_9_24_2024`
- **Architecture**: [BETO (BERT-base Spanish WWM)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased)
- **Language**: Spanish
- **Task**: Binary Text Classification
- **Objective**: Given a **prompt**, the model predicts a binary label indicating whether the prompt **requires a search query** (`1 = requires search`, `0 = no search`).
---
## **Intended Use and Applications**
1. **Intelligent Search Systems**: Enhance search engines by determining when a user prompt necessitates a search query, improving search relevance and user experience.
2. **Content Recommendation**: Automatically categorize content requests to provide appropriate recommendations or resources.
3. **Automated Query Handling**: Streamline customer support or chatbot systems by identifying when additional information retrieval is needed.
4. **Information Filtering**: Sort or prioritize user inputs based on the necessity of executing a search, optimizing backend processing.
5. **Educational Tools**: Assist in language learning applications by categorizing prompts for tailored responses or resources.
---
## **How It Was Trained**
### **1. Data Source**
- **Dataset**: The model was trained on a dataset sourced from an internal SQL Server database containing:
- **Prompts** (`input_text`): Text inputs requiring classification.
- **Requires Search** (`requires_search`): Binary labels indicating whether the prompt necessitates a search (`1`) or not (`0`).
- **Data Selection**: The top **5,000** `(prompt, requires_search)` pairs were selected where both `prompt` and `requires_search` fields are non-empty, ensuring high-quality and relevant training data.
### **2. Preprocessing**
- **Data Loading**: Utilized `pandas` to load data from a CSV file containing the necessary columns.
- **Data Splitting**:
- **Training Set**: 80% of the data.
- **Validation Set**: 20% of the remaining data.
- **Test Set**: 1% of the training data (resulting in approximately 10% of the total data for testing).
- **Tokenization**:
- Employed **BETO Tokenizer** (`BertTokenizer`) suitable for Spanish text.
- Configured with:
- `truncation=True`: Truncate sequences longer than the maximum length.
- `padding=True`: Pad shorter sequences to the maximum length.
- `max_length=512`: Set maximum token length to 512 tokens.
### **3. Training Setup**
- **Base Model**: `dccuchile/bert-base-spanish-wwm-cased`
- **Framework**: TensorFlow with Keras API.
- **Model Architecture**: `TFBertForSequenceClassification` adapted for binary classification (`num_labels=2`).
- **Loss Function**: `SparseCategoricalCrossentropy` with `from_logits=True` to handle integer labels directly.
- **Optimizer**: Adam optimizer with a learning rate of `5e-5` and weight decay of `0.01`.
- **Metrics**: `SparseCategoricalAccuracy` to monitor classification accuracy during training.
- **Training Parameters**:
- **Epochs**: 4
- **Batch Size**: 16
- **Early Stopping**: Implemented via Keras callbacks to prevent overfitting by monitoring validation loss.
### **4. Data Splits**
- **Training Set**: 80%
- **Validation Set**: 19%
- **Test Set**: 1%
This split ensures that the model has ample data for learning while retaining sufficient data for unbiased evaluation.
---
## **Model Performance**
- **Training Metrics**:
- **Loss**: Monitored using `SparseCategoricalCrossentropy` on both training and validation sets.
- **Accuracy**: Tracked to evaluate the proportion of correct predictions.
- **Final Evaluation**:
- **Test Set Performance**: The model's performance on the test set is logged as `Test Loss` and `Test Sparse Categorical Accuracy`.
- **Performance Notes**: Specific numerical results (e.g., exact loss and accuracy values) depend on the data distribution and training conditions. Users are encouraged to evaluate the model on their own datasets to assess performance in their specific contexts.
---
## **Usage Example**
Below is a Python example demonstrating how to use the fine-tuned BETO model for binary text classification in Spanish. Ensure you have installed the necessary libraries (`transformers`, `tensorflow`, `pandas`, etc.) and have the model saved in the specified `output_dir`.
```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification
# Load the trained model and tokenizer
model_dir = "./text_classification_beto_tf_9_24_2024"
tokenizer = BertTokenizer.from_pretrained(model_dir)
model = TFBertForSequenceClassification.from_pretrained(model_dir)
# Prepare the input
prompt = "¿Cómo puedo mejorar la eficiencia energética en mi hogar?"
# Tokenize the input
inputs = tokenizer(
prompt,
return_tensors="tf",
max_length=512,
truncation=True,
padding=True
)
# Perform prediction
outputs = model(inputs)
logits = outputs.logits
predicted_class = tf.argmax(logits, axis=1).numpy()[0]
# Interpret the result
if predicted_class == 1:
print("Requiere búsqueda: Sí")
else:
print("Requiere búsqueda: No")
```
**Output:**
```
Requiere búsqueda: Sí
```
This script loads the fine-tuned model and tokenizer, tokenizes a sample prompt, performs a prediction, and interprets the result by indicating whether the prompt requires a search query.
---
## **Limitations and Ethical Considerations**
1. **Bias and Fairness**:
- The model's predictions are influenced by the training data. If the dataset contains biases (e.g., overrepresentation of certain topics), the model may inadvertently reflect those biases. Users should ensure the training data is balanced and representative of diverse prompts.
2. **Data Privacy**:
- Ensure that the data used for training does not contain sensitive or personal information unless appropriate consent has been obtained. Compliance with data protection regulations (e.g., GDPR) is essential.
3. **Domain Specificity**:
- The model was trained on specific prompts and may perform optimally within similar contexts. Its performance may degrade when applied to highly specialized or unfamiliar domains.
4. **Misclassification Risks**:
- Incorrect predictions (false positives or false negatives) can impact user experience. Implement additional checks or human-in-the-loop systems for critical applications.
5. **Responsible Usage**:
- Prevent misuse by ensuring the model is employed ethically, avoiding applications that could harm individuals or groups. Regularly monitor and evaluate the model's outputs to maintain ethical standards.
---
## **Intended Users**
- **Developers** building intelligent search and recommendation systems in Spanish.
- **Content Managers** seeking to automate the categorization of user prompts for content delivery.
- **Researchers** exploring text classification and natural language processing tasks in Spanish.
- **Businesses** integrating automated query handling or customer support systems.
- **Educational Institutions** developing tools for language learning and information retrieval.
---
|
{"base_model": ["dccuchile/bert-base-spanish-wwm-cased"], "language": ["es"], "license": "apache-2.0"}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,475 |
ttellner/distilbert-base-uncased-finetuned-emotion
|
ttellner
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-01-21T04:00:19Z |
2024-01-21T05:31:50+00:00
| 3 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.92
name: Accuracy
- type: f1
value: 0.9195426549881595
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2160
- Accuracy: 0.92
- F1: 0.9195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8294 | 1.0 | 250 | 0.3168 | 0.905 | 0.9035 |
| 0.251 | 2.0 | 500 | 0.2160 | 0.92 | 0.9195 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cpu
- Datasets 2.16.1
- Tokenizers 0.15.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2160
- Accuracy: 0.92
- F1: 0.9195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8294 | 1.0 | 250 | 0.3168 | 0.905 | 0.9035 |
| 0.251 | 2.0 | 500 | 0.2160 | 0.92 | 0.9195 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cpu
- Datasets 2.16.1
- Tokenizers 0.15.0
|
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.92, "name": "Accuracy"}, {"type": "f1", "value": 0.9195426549881595, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,476 |
Laksitha/autotrain-enhanced-tosdr-summariser-1339851272
|
Laksitha
|
summarization
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:Laksitha/autotrain-data-enhanced-tosdr-summariser",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-08-30T16:38:01Z |
2022-08-30T16:40:01+00:00
| 23 | 0 |
---
datasets:
- Laksitha/autotrain-data-enhanced-tosdr-summariser
language:
- unk
tags:
- autotrain
- summarization
widget:
- text: I love AutoTrain 🤗
co2_eq_emissions:
emissions: 0.011960118277424782
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1339851272
- CO2 Emissions (in grams): 0.0120
## Validation Metrics
- Loss: 2.416
- Rouge1: 34.945
- Rouge2: 12.533
- RougeL: 19.876
- RougeLsum: 31.821
- Gen Len: 92.917
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/Laksitha/autotrain-enhanced-tosdr-summariser-1339851272
```
| null |
Non_BioNLP
|
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1339851272
- CO2 Emissions (in grams): 0.0120
## Validation Metrics
- Loss: 2.416
- Rouge1: 34.945
- Rouge2: 12.533
- RougeL: 19.876
- RougeLsum: 31.821
- Gen Len: 92.917
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/Laksitha/autotrain-enhanced-tosdr-summariser-1339851272
```
|
{"datasets": ["Laksitha/autotrain-data-enhanced-tosdr-summariser"], "language": ["unk"], "tags": ["autotrain", "summarization"], "widget": [{"text": "I love AutoTrain 🤗"}], "co2_eq_emissions": {"emissions": 0.011960118277424782}}
|
task
|
[
"SUMMARIZATION"
] | 42,477 |
andyP/sf-it-aug-01
|
andyP
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-04-24T11:27:43Z |
2023-04-24T11:28:05+00:00
| 11 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# andyP/sf-it-aug-01
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("andyP/sf-it-aug-01")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null |
Non_BioNLP
|
# andyP/sf-it-aug-01
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("andyP/sf-it-aug-01")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,478 |
unity/sentis-blaze-hand
|
unity
|
object-detection
|
[
"unity-sentis",
"onnx",
"object-detection",
"license:apache-2.0",
"region:us"
] | 2024-02-13T23:26:59Z |
2024-09-16T14:55:17+00:00
| 34 | 3 |
---
library_name: unity-sentis
license: apache-2.0
pipeline_tag: object-detection
---
# BlazeHand in Sentis
BlazeHand is a fast, light-weight hand detector from Google Research. Pretrained models are available as part of Google's [MediaPipe](https://ai.google.dev/edge/mediapipe/solutions/vision/hand_landmarker) framework.

The BlazeHand models have been converted from TFLite to ONNX for use in Sentis using [tf2onnx](https://github.com/onnx/tensorflow-onnx) with the default export parameters.
## Functional API
The BlazeHand detector model takes a (1, 192, 192, 3) input image tensor and outputs a (1, 2016, 18) boxes tensor and a (1, 2016, 1) scores tensor.
Each of the 2016 boxes consists of:
- [x position, y position, width, height] for the palm bounding box. The position is relative to the anchor position for the given index, these are precalculated and loaded from a csv file.
- [x position, y position] for each of 7 palm keypoints relative to the anchor position.
We adapt the model using the Sentis functional API to apply arg max to filter the box with the highest score.
```
var detectionScores = ScoreFiltering(rawScores, 100f); // (1, 2254, 1)
var bestScoreIndex = Functional.ArgMax(rawScores, 1).Squeeze();
var selectedBoxes = Functional.IndexSelect(rawBoxes, 1, bestScoreIndex).Unsqueeze(0); // (1, 1, 16)
var selectedScores = Functional.IndexSelect(detectionScores, 1, bestScoreIndex).Unsqueeze(0); // (1, 1, 1)
```
The BlazeHand landmarker model takes a (1, 224, 224, 3) input image tensor cropped to the detected hand and outputs a (1, 63) tensor consisting of the x, y, and z coordinates for each of 21 hand keypoints. We use this model without adaptation.
## Model inference
We use the dimensions of the texture to set up an affine transformation matrix to go from the 192x192 tensor coordinates to the image coordinates. We then fill the input tensor using a compute shader with this affine transformation, points outside the image will correspond to zeros in the input tensor.
```
var size = Mathf.Max(texture.width, texture.height);
// The affine transformation matrix to go from tensor coordinates to image coordinates
var scale = size / (float)detectorInputSize;
var M = BlazeUtils.mul(BlazeUtils.TranslationMatrix(0.5f * (new Vector2(texture.width, texture.height) + new Vector2(-size, size))), BlazeUtils.ScaleMatrix(new Vector2(scale, -scale)));
BlazeUtils.SampleImageAffine(texture, m_DetectorInput, M);
m_HandDetectorWorker.Schedule(m_DetectorInput);
```
Execution is scheduled using an [Awaitable](https://docs.unity3d.com/6000.0/Documentation/ScriptReference/Awaitable.html) and the output tensors are downloaded and awaited. This frees up the main thread while the GPU computation and download takes place.
```
var outputIdxAwaitable = (m_HandDetectorWorker.PeekOutput(0) as Tensor<int>).ReadbackAndCloneAsync();
var outputScoreAwaitable = (m_HandDetectorWorker.PeekOutput(1) as Tensor<float>).ReadbackAndCloneAsync();
var outputBoxAwaitable = (m_HandDetectorWorker.PeekOutput(2) as Tensor<float>).ReadbackAndCloneAsync();
using var outputIdx = await outputIdxAwaitable;
using var outputScore = await outputScoreAwaitable;
using var outputBox = await outputBoxAwaitable;
```
The output tensors of the detector model are now on the CPU and can be read. If the score passes our score threshold, we use the box position and keypoint positions to set up a second affine transformation. This is calculated so that the hand will be centred, rotated and scaled to fill the landmarker input tensor.

```
var boxCentre_TensorSpace = anchorPosition + new float2(outputBox[0, 0, 0], outputBox[0, 0, 1]);
var boxSize_TensorSpace = math.max(outputBox[0, 0, 2], outputBox[0, 0, 3]);
var kp0_TensorSpace = anchorPosition + new float2(outputBox[0, 0, 4 + 2 * 0 + 0], outputBox[0, 0, 4 + 2 * 0 + 1]);
var kp2_TensorSpace = anchorPosition + new float2(outputBox[0, 0, 4 + 2 * 2 + 0], outputBox[0, 0, 4 + 2 * 2 + 1]);
var delta_TensorSpace = kp2_TensorSpace - kp0_TensorSpace;
var up_TensorSpace = delta_TensorSpace / math.length(delta_TensorSpace);
var theta = math.atan2(delta_TensorSpace.y, delta_TensorSpace.x);
var rotation = 0.5f * Mathf.PI - theta;
boxCentre_TensorSpace += 0.5f * boxSize_TensorSpace * up_TensorSpace;
boxSize_TensorSpace *= 2.6f;
var origin2 = new float2(0.5f * landmarkerInputSize, 0.5f * landmarkerInputSize);
var scale2 = boxSize_TensorSpace / landmarkerInputSize;
var M2 = BlazeUtils.mul(M, BlazeUtils.mul(BlazeUtils.mul(BlazeUtils.mul(BlazeUtils.TranslationMatrix(boxCentre_TensorSpace), BlazeUtils.ScaleMatrix(new float2(scale2, -scale2))), BlazeUtils.RotationMatrix(rotation)), BlazeUtils.TranslationMatrix(-origin2)));
BlazeUtils.SampleImageAffine(texture, m_LandmarkerInput, M2);
m_HandLandmarkerWorker.Schedule(m_LandmarkerInput);
```
The output tensor of the landmarker model is asynchronously downloaded and once the values are on the CPU we use them together with the affine transformation matrix to set the transforms on the keypoints for visualization.
## WebGPU
Unity 6 supports access to the WebGPU backend in early access. Sentis has full support for running models on the web using the WebGPU backend. Discover how to gain early access and test WebGPU in our [graphics forum](https://discussions.unity.com/t/early-access-to-the-new-webgpu-backend-in-unity-2023-3/933493).

| null |
Non_BioNLP
|
# BlazeHand in Sentis
BlazeHand is a fast, light-weight hand detector from Google Research. Pretrained models are available as part of Google's [MediaPipe](https://ai.google.dev/edge/mediapipe/solutions/vision/hand_landmarker) framework.

The BlazeHand models have been converted from TFLite to ONNX for use in Sentis using [tf2onnx](https://github.com/onnx/tensorflow-onnx) with the default export parameters.
## Functional API
The BlazeHand detector model takes a (1, 192, 192, 3) input image tensor and outputs a (1, 2016, 18) boxes tensor and a (1, 2016, 1) scores tensor.
Each of the 2016 boxes consists of:
- [x position, y position, width, height] for the palm bounding box. The position is relative to the anchor position for the given index, these are precalculated and loaded from a csv file.
- [x position, y position] for each of 7 palm keypoints relative to the anchor position.
We adapt the model using the Sentis functional API to apply arg max to filter the box with the highest score.
```
var detectionScores = ScoreFiltering(rawScores, 100f); // (1, 2254, 1)
var bestScoreIndex = Functional.ArgMax(rawScores, 1).Squeeze();
var selectedBoxes = Functional.IndexSelect(rawBoxes, 1, bestScoreIndex).Unsqueeze(0); // (1, 1, 16)
var selectedScores = Functional.IndexSelect(detectionScores, 1, bestScoreIndex).Unsqueeze(0); // (1, 1, 1)
```
The BlazeHand landmarker model takes a (1, 224, 224, 3) input image tensor cropped to the detected hand and outputs a (1, 63) tensor consisting of the x, y, and z coordinates for each of 21 hand keypoints. We use this model without adaptation.
## Model inference
We use the dimensions of the texture to set up an affine transformation matrix to go from the 192x192 tensor coordinates to the image coordinates. We then fill the input tensor using a compute shader with this affine transformation, points outside the image will correspond to zeros in the input tensor.
```
var size = Mathf.Max(texture.width, texture.height);
// The affine transformation matrix to go from tensor coordinates to image coordinates
var scale = size / (float)detectorInputSize;
var M = BlazeUtils.mul(BlazeUtils.TranslationMatrix(0.5f * (new Vector2(texture.width, texture.height) + new Vector2(-size, size))), BlazeUtils.ScaleMatrix(new Vector2(scale, -scale)));
BlazeUtils.SampleImageAffine(texture, m_DetectorInput, M);
m_HandDetectorWorker.Schedule(m_DetectorInput);
```
Execution is scheduled using an [Awaitable](https://docs.unity3d.com/6000.0/Documentation/ScriptReference/Awaitable.html) and the output tensors are downloaded and awaited. This frees up the main thread while the GPU computation and download takes place.
```
var outputIdxAwaitable = (m_HandDetectorWorker.PeekOutput(0) as Tensor<int>).ReadbackAndCloneAsync();
var outputScoreAwaitable = (m_HandDetectorWorker.PeekOutput(1) as Tensor<float>).ReadbackAndCloneAsync();
var outputBoxAwaitable = (m_HandDetectorWorker.PeekOutput(2) as Tensor<float>).ReadbackAndCloneAsync();
using var outputIdx = await outputIdxAwaitable;
using var outputScore = await outputScoreAwaitable;
using var outputBox = await outputBoxAwaitable;
```
The output tensors of the detector model are now on the CPU and can be read. If the score passes our score threshold, we use the box position and keypoint positions to set up a second affine transformation. This is calculated so that the hand will be centred, rotated and scaled to fill the landmarker input tensor.

```
var boxCentre_TensorSpace = anchorPosition + new float2(outputBox[0, 0, 0], outputBox[0, 0, 1]);
var boxSize_TensorSpace = math.max(outputBox[0, 0, 2], outputBox[0, 0, 3]);
var kp0_TensorSpace = anchorPosition + new float2(outputBox[0, 0, 4 + 2 * 0 + 0], outputBox[0, 0, 4 + 2 * 0 + 1]);
var kp2_TensorSpace = anchorPosition + new float2(outputBox[0, 0, 4 + 2 * 2 + 0], outputBox[0, 0, 4 + 2 * 2 + 1]);
var delta_TensorSpace = kp2_TensorSpace - kp0_TensorSpace;
var up_TensorSpace = delta_TensorSpace / math.length(delta_TensorSpace);
var theta = math.atan2(delta_TensorSpace.y, delta_TensorSpace.x);
var rotation = 0.5f * Mathf.PI - theta;
boxCentre_TensorSpace += 0.5f * boxSize_TensorSpace * up_TensorSpace;
boxSize_TensorSpace *= 2.6f;
var origin2 = new float2(0.5f * landmarkerInputSize, 0.5f * landmarkerInputSize);
var scale2 = boxSize_TensorSpace / landmarkerInputSize;
var M2 = BlazeUtils.mul(M, BlazeUtils.mul(BlazeUtils.mul(BlazeUtils.mul(BlazeUtils.TranslationMatrix(boxCentre_TensorSpace), BlazeUtils.ScaleMatrix(new float2(scale2, -scale2))), BlazeUtils.RotationMatrix(rotation)), BlazeUtils.TranslationMatrix(-origin2)));
BlazeUtils.SampleImageAffine(texture, m_LandmarkerInput, M2);
m_HandLandmarkerWorker.Schedule(m_LandmarkerInput);
```
The output tensor of the landmarker model is asynchronously downloaded and once the values are on the CPU we use them together with the affine transformation matrix to set the transforms on the keypoints for visualization.
## WebGPU
Unity 6 supports access to the WebGPU backend in early access. Sentis has full support for running models on the web using the WebGPU backend. Discover how to gain early access and test WebGPU in our [graphics forum](https://discussions.unity.com/t/early-access-to-the-new-webgpu-backend-in-unity-2023-3/933493).

|
{"library_name": "unity-sentis", "license": "apache-2.0", "pipeline_tag": "object-detection"}
|
task
|
[
"TRANSLATION"
] | 42,479 |
DDUKDAE/t5-small-custom
|
DDUKDAE
| null |
[
"safetensors",
"t5",
"region:us"
] | 2024-09-20T04:28:57Z |
2024-09-20T04:29:21+00:00
| 10 | 0 |
---
{}
---
#
# Model Card for t5_small Summarization Model
## Model Details
This model is a fine-tuned version of T5-small for text summarization tasks using the CNN/DailyMail dataset.
## Training Data
The model was trained on a subset (1%) of the CNN/DailyMail dataset, which consists of news articles and their corresponding highlights.
## Training Procedure
- **Learning Rate**: 2e-5
- **Epochs**: 1
- **Batch Size**: 4
- **Max Length**: 512
## How to Use
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("./latest_checkpoint")
model = AutoModelForSeq2SeqLM.from_pretrained("./latest_checkpoint")
## Evaluation
Loss: 0.211
ROUGE-1: 1.59
ROUGE-2: 0.66
ROUGE-L: 1.39
BLEU-1: 61.39
BLEU-2: 30.85
BLEU-4: 11.25
## Limitations
The model may occasionally omit important details or introduce factual inconsistencies in the generated summaries. It also has limited understanding of context in very long articles.
## Ethical Considerations
Bias: The model may reflect biases present in the CNN/DailyMail dataset.
Factual Accuracy: Users should verify the accuracy of generated summaries before use, especially in critical applications.
| null |
Non_BioNLP
|
#
# Model Card for t5_small Summarization Model
## Model Details
This model is a fine-tuned version of T5-small for text summarization tasks using the CNN/DailyMail dataset.
## Training Data
The model was trained on a subset (1%) of the CNN/DailyMail dataset, which consists of news articles and their corresponding highlights.
## Training Procedure
- **Learning Rate**: 2e-5
- **Epochs**: 1
- **Batch Size**: 4
- **Max Length**: 512
## How to Use
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("./latest_checkpoint")
model = AutoModelForSeq2SeqLM.from_pretrained("./latest_checkpoint")
## Evaluation
Loss: 0.211
ROUGE-1: 1.59
ROUGE-2: 0.66
ROUGE-L: 1.39
BLEU-1: 61.39
BLEU-2: 30.85
BLEU-4: 11.25
## Limitations
The model may occasionally omit important details or introduce factual inconsistencies in the generated summaries. It also has limited understanding of context in very long articles.
## Ethical Considerations
Bias: The model may reflect biases present in the CNN/DailyMail dataset.
Factual Accuracy: Users should verify the accuracy of generated summaries before use, especially in critical applications.
|
{}
|
task
|
[
"SUMMARIZATION"
] | 42,480 |
henryscheible/mnli_bert-base-uncased_81
|
henryscheible
| null |
[
"pytorch",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"region:us"
] | 2023-01-18T22:39:08Z |
2023-01-19T01:26:04+00:00
| 0 | 0 |
---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: mnli_bert-base-uncased_81
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- type: accuracy
value: 0.8206875508543532
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mnli_bert-base-uncased_81
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4882
- Accuracy: 0.8207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 400
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mnli_bert-base-uncased_81
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4882
- Accuracy: 0.8207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 400
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
{"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "mnli_bert-base-uncased_81", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE MNLI", "type": "glue", "args": "mnli"}, "metrics": [{"type": "accuracy", "value": 0.8206875508543532, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,481 |
ak2603/mt5-small-synthetic-data-plus-translated-bs32
|
ak2603
|
summarization
|
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-01-25T11:45:46Z |
2025-01-25T11:55:49+00:00
| 15 | 0 |
---
base_model: google/mt5-small
library_name: transformers
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: mt5-small-synthetic-data-plus-translated-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-synthetic-data-plus-translated-bs32
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8369
- Rouge1: 0.6206
- Rouge2: 0.4859
- Rougel: 0.5972
- Rougelsum: 0.5979
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 19.4785 | 1.0 | 38 | 11.5404 | 0.0055 | 0.0008 | 0.0051 | 0.0051 |
| 11.9977 | 2.0 | 76 | 6.4079 | 0.0101 | 0.0015 | 0.0089 | 0.0094 |
| 7.5027 | 3.0 | 114 | 3.0626 | 0.0542 | 0.0093 | 0.0482 | 0.0487 |
| 4.8939 | 4.0 | 152 | 2.2496 | 0.0492 | 0.0182 | 0.0429 | 0.0437 |
| 3.64 | 5.0 | 190 | 1.7984 | 0.1870 | 0.0826 | 0.1598 | 0.1601 |
| 2.8662 | 6.0 | 228 | 1.4518 | 0.1852 | 0.0916 | 0.1653 | 0.1659 |
| 2.4493 | 7.0 | 266 | 1.3124 | 0.4183 | 0.2586 | 0.4014 | 0.4026 |
| 2.1362 | 8.0 | 304 | 1.2444 | 0.4386 | 0.2716 | 0.4176 | 0.4196 |
| 1.9923 | 9.0 | 342 | 1.1876 | 0.4587 | 0.3034 | 0.4387 | 0.4404 |
| 1.8438 | 10.0 | 380 | 1.1486 | 0.5198 | 0.3637 | 0.4979 | 0.4988 |
| 1.7212 | 11.0 | 418 | 1.1031 | 0.5402 | 0.3848 | 0.5160 | 0.5169 |
| 1.6315 | 12.0 | 456 | 1.0707 | 0.5556 | 0.3999 | 0.5325 | 0.5341 |
| 1.5623 | 13.0 | 494 | 1.0437 | 0.5808 | 0.4309 | 0.5583 | 0.5593 |
| 1.5269 | 14.0 | 532 | 1.0188 | 0.5986 | 0.4540 | 0.5773 | 0.5772 |
| 1.4668 | 15.0 | 570 | 0.9982 | 0.5922 | 0.4511 | 0.5731 | 0.5737 |
| 1.4357 | 16.0 | 608 | 0.9777 | 0.5965 | 0.4549 | 0.5768 | 0.5773 |
| 1.3684 | 17.0 | 646 | 0.9623 | 0.6123 | 0.4722 | 0.5901 | 0.5907 |
| 1.3675 | 18.0 | 684 | 0.9461 | 0.6135 | 0.4771 | 0.5915 | 0.5919 |
| 1.3285 | 19.0 | 722 | 0.9324 | 0.6150 | 0.4754 | 0.5916 | 0.5918 |
| 1.288 | 20.0 | 760 | 0.9271 | 0.6179 | 0.4803 | 0.5964 | 0.5968 |
| 1.2529 | 21.0 | 798 | 0.9129 | 0.6156 | 0.4789 | 0.5939 | 0.5940 |
| 1.2216 | 22.0 | 836 | 0.9017 | 0.6163 | 0.4817 | 0.5941 | 0.5941 |
| 1.2322 | 23.0 | 874 | 0.8948 | 0.6208 | 0.4839 | 0.5985 | 0.5986 |
| 1.2062 | 24.0 | 912 | 0.8838 | 0.6139 | 0.4778 | 0.5904 | 0.5912 |
| 1.1642 | 25.0 | 950 | 0.8761 | 0.6150 | 0.4818 | 0.5939 | 0.5951 |
| 1.1699 | 26.0 | 988 | 0.8759 | 0.6152 | 0.4794 | 0.5929 | 0.5932 |
| 1.1428 | 27.0 | 1026 | 0.8662 | 0.6158 | 0.4806 | 0.5935 | 0.5946 |
| 1.195 | 28.0 | 1064 | 0.8609 | 0.6126 | 0.4758 | 0.5898 | 0.5908 |
| 1.1619 | 29.0 | 1102 | 0.8568 | 0.6152 | 0.4776 | 0.5924 | 0.5936 |
| 1.1172 | 30.0 | 1140 | 0.8548 | 0.6181 | 0.4788 | 0.5951 | 0.5964 |
| 1.1141 | 31.0 | 1178 | 0.8526 | 0.6148 | 0.4766 | 0.5904 | 0.5914 |
| 1.1176 | 32.0 | 1216 | 0.8488 | 0.6201 | 0.4834 | 0.5963 | 0.5972 |
| 1.0959 | 33.0 | 1254 | 0.8475 | 0.6225 | 0.4847 | 0.5983 | 0.5993 |
| 1.0954 | 34.0 | 1292 | 0.8437 | 0.6220 | 0.4859 | 0.5987 | 0.5986 |
| 1.0844 | 35.0 | 1330 | 0.8420 | 0.6206 | 0.4851 | 0.5969 | 0.5974 |
| 1.1041 | 36.0 | 1368 | 0.8398 | 0.6222 | 0.4865 | 0.5991 | 0.5992 |
| 1.0736 | 37.0 | 1406 | 0.8386 | 0.6225 | 0.4867 | 0.5991 | 0.6001 |
| 1.0816 | 38.0 | 1444 | 0.8376 | 0.6229 | 0.4871 | 0.5994 | 0.6001 |
| 1.0537 | 39.0 | 1482 | 0.8372 | 0.6242 | 0.4876 | 0.6004 | 0.6013 |
| 1.092 | 40.0 | 1520 | 0.8369 | 0.6206 | 0.4859 | 0.5972 | 0.5979 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-synthetic-data-plus-translated-bs32
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8369
- Rouge1: 0.6206
- Rouge2: 0.4859
- Rougel: 0.5972
- Rougelsum: 0.5979
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 19.4785 | 1.0 | 38 | 11.5404 | 0.0055 | 0.0008 | 0.0051 | 0.0051 |
| 11.9977 | 2.0 | 76 | 6.4079 | 0.0101 | 0.0015 | 0.0089 | 0.0094 |
| 7.5027 | 3.0 | 114 | 3.0626 | 0.0542 | 0.0093 | 0.0482 | 0.0487 |
| 4.8939 | 4.0 | 152 | 2.2496 | 0.0492 | 0.0182 | 0.0429 | 0.0437 |
| 3.64 | 5.0 | 190 | 1.7984 | 0.1870 | 0.0826 | 0.1598 | 0.1601 |
| 2.8662 | 6.0 | 228 | 1.4518 | 0.1852 | 0.0916 | 0.1653 | 0.1659 |
| 2.4493 | 7.0 | 266 | 1.3124 | 0.4183 | 0.2586 | 0.4014 | 0.4026 |
| 2.1362 | 8.0 | 304 | 1.2444 | 0.4386 | 0.2716 | 0.4176 | 0.4196 |
| 1.9923 | 9.0 | 342 | 1.1876 | 0.4587 | 0.3034 | 0.4387 | 0.4404 |
| 1.8438 | 10.0 | 380 | 1.1486 | 0.5198 | 0.3637 | 0.4979 | 0.4988 |
| 1.7212 | 11.0 | 418 | 1.1031 | 0.5402 | 0.3848 | 0.5160 | 0.5169 |
| 1.6315 | 12.0 | 456 | 1.0707 | 0.5556 | 0.3999 | 0.5325 | 0.5341 |
| 1.5623 | 13.0 | 494 | 1.0437 | 0.5808 | 0.4309 | 0.5583 | 0.5593 |
| 1.5269 | 14.0 | 532 | 1.0188 | 0.5986 | 0.4540 | 0.5773 | 0.5772 |
| 1.4668 | 15.0 | 570 | 0.9982 | 0.5922 | 0.4511 | 0.5731 | 0.5737 |
| 1.4357 | 16.0 | 608 | 0.9777 | 0.5965 | 0.4549 | 0.5768 | 0.5773 |
| 1.3684 | 17.0 | 646 | 0.9623 | 0.6123 | 0.4722 | 0.5901 | 0.5907 |
| 1.3675 | 18.0 | 684 | 0.9461 | 0.6135 | 0.4771 | 0.5915 | 0.5919 |
| 1.3285 | 19.0 | 722 | 0.9324 | 0.6150 | 0.4754 | 0.5916 | 0.5918 |
| 1.288 | 20.0 | 760 | 0.9271 | 0.6179 | 0.4803 | 0.5964 | 0.5968 |
| 1.2529 | 21.0 | 798 | 0.9129 | 0.6156 | 0.4789 | 0.5939 | 0.5940 |
| 1.2216 | 22.0 | 836 | 0.9017 | 0.6163 | 0.4817 | 0.5941 | 0.5941 |
| 1.2322 | 23.0 | 874 | 0.8948 | 0.6208 | 0.4839 | 0.5985 | 0.5986 |
| 1.2062 | 24.0 | 912 | 0.8838 | 0.6139 | 0.4778 | 0.5904 | 0.5912 |
| 1.1642 | 25.0 | 950 | 0.8761 | 0.6150 | 0.4818 | 0.5939 | 0.5951 |
| 1.1699 | 26.0 | 988 | 0.8759 | 0.6152 | 0.4794 | 0.5929 | 0.5932 |
| 1.1428 | 27.0 | 1026 | 0.8662 | 0.6158 | 0.4806 | 0.5935 | 0.5946 |
| 1.195 | 28.0 | 1064 | 0.8609 | 0.6126 | 0.4758 | 0.5898 | 0.5908 |
| 1.1619 | 29.0 | 1102 | 0.8568 | 0.6152 | 0.4776 | 0.5924 | 0.5936 |
| 1.1172 | 30.0 | 1140 | 0.8548 | 0.6181 | 0.4788 | 0.5951 | 0.5964 |
| 1.1141 | 31.0 | 1178 | 0.8526 | 0.6148 | 0.4766 | 0.5904 | 0.5914 |
| 1.1176 | 32.0 | 1216 | 0.8488 | 0.6201 | 0.4834 | 0.5963 | 0.5972 |
| 1.0959 | 33.0 | 1254 | 0.8475 | 0.6225 | 0.4847 | 0.5983 | 0.5993 |
| 1.0954 | 34.0 | 1292 | 0.8437 | 0.6220 | 0.4859 | 0.5987 | 0.5986 |
| 1.0844 | 35.0 | 1330 | 0.8420 | 0.6206 | 0.4851 | 0.5969 | 0.5974 |
| 1.1041 | 36.0 | 1368 | 0.8398 | 0.6222 | 0.4865 | 0.5991 | 0.5992 |
| 1.0736 | 37.0 | 1406 | 0.8386 | 0.6225 | 0.4867 | 0.5991 | 0.6001 |
| 1.0816 | 38.0 | 1444 | 0.8376 | 0.6229 | 0.4871 | 0.5994 | 0.6001 |
| 1.0537 | 39.0 | 1482 | 0.8372 | 0.6242 | 0.4876 | 0.6004 | 0.6013 |
| 1.092 | 40.0 | 1520 | 0.8369 | 0.6206 | 0.4859 | 0.5972 | 0.5979 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
{"base_model": "google/mt5-small", "library_name": "transformers", "license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "mt5-small-synthetic-data-plus-translated-bs32", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 42,482 |
prithivMLmods/Llama-3.1-5B-Instruct
|
prithivMLmods
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama3.1-5B",
"llama-3",
"Base_Ft",
"facebook",
"text-generation-inference",
"meta",
"ollama",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"license:llama3.1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-01-04T09:21:25Z |
2025-01-07T10:35:05+00:00
| 142 | 1 |
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
tags:
- llama3.1-5B
- llama-3
- Base_Ft
- facebook
- text-generation-inference
- meta
- ollama
---
# **Llama-3.1-5B-Instruct**
Llama-3.1 is a collection of multilingual large language models (LLMs) that includes pretrained and instruction-tuned generative models in various sizes. The **Llama-3.1-5B-Instruct** model is part of the series optimized for multilingual dialogue use cases, offering powerful conversational abilities and outperforming many open-source and closed chat models on key industry benchmarks.
## Model Overview
- **Size**: 5B parameters
- **Model Architecture**: Llama-3.1 is an auto-regressive language model using an optimized transformer architecture.
- **Training**: The model is fine-tuned using Supervised Fine-Tuning (SFT) and Reinforcement Learning with Human Feedback (RLHF) to align with human preferences, ensuring helpfulness, safety, and natural conversations.
The **Llama-3.1-5B-Instruct** model is optimized for multilingual text generation and excels in a variety of dialog-based use cases. It is designed to handle a wide array of tasks, including question answering, translation, and instruction following.
## How to Use
### Requirements
- Install the latest version of **Transformers**:
```bash
pip install --upgrade transformers
```
- Ensure you have **PyTorch** installed with support for `bfloat16`:
```bash
pip install torch
```
### Example Code
Below is an example of how to use the **Llama-3.1-5B-Instruct** model for conversational inference:
```python
import transformers
import torch
# Define the model ID
model_id = "prithivMLmods/Llama-3.1-5B-Instruct"
# Set up the pipeline for text generation
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto", # Use the best device available
)
# Define conversation messages
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
# Generate a response
outputs = pipeline(
messages,
max_new_tokens=256,
)
# Print the generated response
print(outputs[0]["generated_text"][-1])
```
### Model Details
- **Model Type**: Instruction-Tuned Large Language Model (LLM)
- **Training**: Trained using supervised fine-tuning and reinforcement learning with human feedback.
- **Supported Tasks**: Dialogue generation, question answering, translation, and other text-based tasks.
### Performance
The **Llama-3.1-5B-Instruct** model outperforms many existing models on several benchmarks, making it a reliable choice for conversational AI tasks in multilingual environments.
### Notes
- This model is optimized for safety and helpfulness, ensuring a positive user experience.
- The **torch_dtype** is set to `bfloat16` to optimize memory usage and performance.
---
| null |
Non_BioNLP
|
# **Llama-3.1-5B-Instruct**
Llama-3.1 is a collection of multilingual large language models (LLMs) that includes pretrained and instruction-tuned generative models in various sizes. The **Llama-3.1-5B-Instruct** model is part of the series optimized for multilingual dialogue use cases, offering powerful conversational abilities and outperforming many open-source and closed chat models on key industry benchmarks.
## Model Overview
- **Size**: 5B parameters
- **Model Architecture**: Llama-3.1 is an auto-regressive language model using an optimized transformer architecture.
- **Training**: The model is fine-tuned using Supervised Fine-Tuning (SFT) and Reinforcement Learning with Human Feedback (RLHF) to align with human preferences, ensuring helpfulness, safety, and natural conversations.
The **Llama-3.1-5B-Instruct** model is optimized for multilingual text generation and excels in a variety of dialog-based use cases. It is designed to handle a wide array of tasks, including question answering, translation, and instruction following.
## How to Use
### Requirements
- Install the latest version of **Transformers**:
```bash
pip install --upgrade transformers
```
- Ensure you have **PyTorch** installed with support for `bfloat16`:
```bash
pip install torch
```
### Example Code
Below is an example of how to use the **Llama-3.1-5B-Instruct** model for conversational inference:
```python
import transformers
import torch
# Define the model ID
model_id = "prithivMLmods/Llama-3.1-5B-Instruct"
# Set up the pipeline for text generation
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto", # Use the best device available
)
# Define conversation messages
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
# Generate a response
outputs = pipeline(
messages,
max_new_tokens=256,
)
# Print the generated response
print(outputs[0]["generated_text"][-1])
```
### Model Details
- **Model Type**: Instruction-Tuned Large Language Model (LLM)
- **Training**: Trained using supervised fine-tuning and reinforcement learning with human feedback.
- **Supported Tasks**: Dialogue generation, question answering, translation, and other text-based tasks.
### Performance
The **Llama-3.1-5B-Instruct** model outperforms many existing models on several benchmarks, making it a reliable choice for conversational AI tasks in multilingual environments.
### Notes
- This model is optimized for safety and helpfulness, ensuring a positive user experience.
- The **torch_dtype** is set to `bfloat16` to optimize memory usage and performance.
---
|
{"language": ["en", "de", "fr", "it", "pt", "hi", "es", "th"], "library_name": "transformers", "license": "llama3.1", "pipeline_tag": "text-generation", "tags": ["llama3.1-5B", "llama-3", "Base_Ft", "facebook", "text-generation-inference", "meta", "ollama"]}
|
task
|
[
"QUESTION_ANSWERING",
"TRANSLATION"
] | 42,483 |
mohammad-shirkhani/finetune_persian_to_english_mt5_small_on_celeba_hq_summarize
|
mohammad-shirkhani
| null |
[
"safetensors",
"mt5",
"region:us"
] | 2024-08-07T15:53:17Z |
2024-08-16T15:46:39+00:00
| 9 | 0 |
---
{}
---
# Persian-to-Image Text-to-Image Pipeline
## Model Overview
This model pipeline is designed to generate images from Persian text descriptions. It works by first translating the Persian text into English and then using a fine-tuned Stable Diffusion model to generate the corresponding image. The pipeline combines two models: a translation model (`mohammad-shirkhani/finetune_persian_to_english_mt5_base_summarize_on_celeba_hq`) and an image generation model (`ebrahim-k/Stable-Diffusion-1_5-FT-celeba_HQ_en`).
## Model Details
### Translation Model
- **Model Name**: `mohammad-shirkhani/finetune_persian_to_english_mt5_base_summarize_on_celeba_hq`
- **Architecture**: mT5
- **Purpose**: This model translates Persian text into English. It has been fine-tuned on the CelebA-HQ dataset for summarization tasks, making it effective for translating descriptions of facial features.
### Image Generation Model
- **Model Name**: `ebrahim-k/Stable-Diffusion-1_5-FT-celeba_HQ_en`
- **Architecture**: Stable Diffusion 1.5
- **Purpose**: This model generates high-quality images from English text produced by the translation model. It has been fine-tuned on the CelebA-HQ dataset, which makes it particularly effective for generating realistic human faces based on text descriptions.
## Pipeline Description
The pipeline operates through the following steps:
1. **Text Translation**: The Persian input text is translated into English using the mT5-based translation model.
2. **Image Generation**: The translated English text is then used to generate the corresponding image with the Stable Diffusion model.
### Code Implementation
#### 1. Install Required Libraries
```python
!pip install transformers diffusers accelerate torch
```
#### 2. Import Necessary Libraries
```python
import torch
from transformers import MT5ForConditionalGeneration, T5Tokenizer
from diffusers import StableDiffusionPipeline
```
#### 3. Set Device (GPU or CPU)
This code determines whether the pipeline should use a GPU (if available) or fallback to a CPU.
```python
# Determine the device: GPU if available, otherwise CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")
```
#### 4. Define and Load the Persian-to-Image Model Class
The following class handles both translation and image generation tasks.
```python
# Define the model class
class PersianToImageModel:
def __init__(self, translation_model_name, image_model_name, device):
self.device = device
# Load translation model
self.translation_model = MT5ForConditionalGeneration.from_pretrained(translation_model_name).to(device)
self.translation_tokenizer = T5Tokenizer.from_pretrained(translation_model_name)
# Load image generation model
self.image_model = StableDiffusionPipeline.from_pretrained(image_model_name).to(device)
def translate_text(self, persian_text):
input_ids = self.translation_tokenizer.encode(persian_text, return_tensors="pt").to(self.device)
translated_ids = self.translation_model.generate(input_ids, max_length=512, num_beams=4, early_stopping=True)
translated_text = self.translation_tokenizer.decode(translated_ids[0], skip_special_tokens=True)
return translated_text
def generate_image(self, english_text):
image = self.image_model(english_text).images[0]
return image
def __call__(self, persian_text):
# Translate Persian text to English
english_text = self.translate_text(persian_text)
print(f"Translated Text: {english_text}")
# Generate and return image
return self.generate_image(english_text)
```
#### 5. Instantiate the Model
The following code snippet demonstrates how to instantiate the combined model.
```python
# Instantiate the combined model
translation_model_name = 'mohammad-shirkhani/finetune_persian_to_english_mt5_base_summarize_on_celeba_hq'
image_model_name = 'ebrahim-k/Stable-Diffusion-1_5-FT-celeba_HQ_en'
persian_to_image_model = PersianToImageModel(translation_model_name, image_model_name, device)
```
#### 6. Example Usage of the Model
Below are examples of how to use the model to generate images from Persian text.
```python
from IPython.display import display
# Persian text describing a person
persian_text = "این زن دارای موهای موج دار ، لب های بزرگ و موهای قهوه ای است و رژ لب دارد.این زن موهای موج دار و لب های بزرگ دارد و رژ لب دارد.فرد جذاب است و موهای موج دار ، چشم های باریک و موهای قهوه ای دارد."
# Generate and display the image
image = persian_to_image_model(persian_text)
display(image)
# Another example
persian_text2 = "این مرد جذاب دارای موهای قهوه ای ، سوزش های جانبی ، دهان کمی باز و کیسه های زیر چشم است.این فرد جذاب دارای کیسه های زیر چشم ، سوزش های جانبی و دهان کمی باز است."
image2 = persian_to_image_model(persian_text2)
display(image2)
```
| null |
Non_BioNLP
|
# Persian-to-Image Text-to-Image Pipeline
## Model Overview
This model pipeline is designed to generate images from Persian text descriptions. It works by first translating the Persian text into English and then using a fine-tuned Stable Diffusion model to generate the corresponding image. The pipeline combines two models: a translation model (`mohammad-shirkhani/finetune_persian_to_english_mt5_base_summarize_on_celeba_hq`) and an image generation model (`ebrahim-k/Stable-Diffusion-1_5-FT-celeba_HQ_en`).
## Model Details
### Translation Model
- **Model Name**: `mohammad-shirkhani/finetune_persian_to_english_mt5_base_summarize_on_celeba_hq`
- **Architecture**: mT5
- **Purpose**: This model translates Persian text into English. It has been fine-tuned on the CelebA-HQ dataset for summarization tasks, making it effective for translating descriptions of facial features.
### Image Generation Model
- **Model Name**: `ebrahim-k/Stable-Diffusion-1_5-FT-celeba_HQ_en`
- **Architecture**: Stable Diffusion 1.5
- **Purpose**: This model generates high-quality images from English text produced by the translation model. It has been fine-tuned on the CelebA-HQ dataset, which makes it particularly effective for generating realistic human faces based on text descriptions.
## Pipeline Description
The pipeline operates through the following steps:
1. **Text Translation**: The Persian input text is translated into English using the mT5-based translation model.
2. **Image Generation**: The translated English text is then used to generate the corresponding image with the Stable Diffusion model.
### Code Implementation
#### 1. Install Required Libraries
```python
!pip install transformers diffusers accelerate torch
```
#### 2. Import Necessary Libraries
```python
import torch
from transformers import MT5ForConditionalGeneration, T5Tokenizer
from diffusers import StableDiffusionPipeline
```
#### 3. Set Device (GPU or CPU)
This code determines whether the pipeline should use a GPU (if available) or fallback to a CPU.
```python
# Determine the device: GPU if available, otherwise CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")
```
#### 4. Define and Load the Persian-to-Image Model Class
The following class handles both translation and image generation tasks.
```python
# Define the model class
class PersianToImageModel:
def __init__(self, translation_model_name, image_model_name, device):
self.device = device
# Load translation model
self.translation_model = MT5ForConditionalGeneration.from_pretrained(translation_model_name).to(device)
self.translation_tokenizer = T5Tokenizer.from_pretrained(translation_model_name)
# Load image generation model
self.image_model = StableDiffusionPipeline.from_pretrained(image_model_name).to(device)
def translate_text(self, persian_text):
input_ids = self.translation_tokenizer.encode(persian_text, return_tensors="pt").to(self.device)
translated_ids = self.translation_model.generate(input_ids, max_length=512, num_beams=4, early_stopping=True)
translated_text = self.translation_tokenizer.decode(translated_ids[0], skip_special_tokens=True)
return translated_text
def generate_image(self, english_text):
image = self.image_model(english_text).images[0]
return image
def __call__(self, persian_text):
# Translate Persian text to English
english_text = self.translate_text(persian_text)
print(f"Translated Text: {english_text}")
# Generate and return image
return self.generate_image(english_text)
```
#### 5. Instantiate the Model
The following code snippet demonstrates how to instantiate the combined model.
```python
# Instantiate the combined model
translation_model_name = 'mohammad-shirkhani/finetune_persian_to_english_mt5_base_summarize_on_celeba_hq'
image_model_name = 'ebrahim-k/Stable-Diffusion-1_5-FT-celeba_HQ_en'
persian_to_image_model = PersianToImageModel(translation_model_name, image_model_name, device)
```
#### 6. Example Usage of the Model
Below are examples of how to use the model to generate images from Persian text.
```python
from IPython.display import display
# Persian text describing a person
persian_text = "این زن دارای موهای موج دار ، لب های بزرگ و موهای قهوه ای است و رژ لب دارد.این زن موهای موج دار و لب های بزرگ دارد و رژ لب دارد.فرد جذاب است و موهای موج دار ، چشم های باریک و موهای قهوه ای دارد."
# Generate and display the image
image = persian_to_image_model(persian_text)
display(image)
# Another example
persian_text2 = "این مرد جذاب دارای موهای قهوه ای ، سوزش های جانبی ، دهان کمی باز و کیسه های زیر چشم است.این فرد جذاب دارای کیسه های زیر چشم ، سوزش های جانبی و دهان کمی باز است."
image2 = persian_to_image_model(persian_text2)
display(image2)
```
|
{}
|
task
|
[
"TRANSLATION",
"SUMMARIZATION"
] | 42,484 |
fakezeta/Phi-3.5-mini-instruct-ov-int8
|
fakezeta
|
text-generation
|
[
"transformers",
"openvino",
"phi3",
"text-generation",
"nlp",
"code",
"conversational",
"custom_code",
"multilingual",
"arxiv:2404.14219",
"arxiv:2407.13833",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-08-29T17:12:00Z |
2024-08-29T17:15:27+00:00
| 14 | 0 |
---
language:
- multilingual
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# OpenVINO IR model with int8 quantization
Model definition for LocalAI:
```yaml
name: phi3.5-mini
backend: transformers
parameters:
model: fakezeta/Phi-3.5-mini-instruct-ov-int8
context_size: 8192
type: OVModelForCausalLM
trust_remote_code: true
template:
use_tokenizer_template: true
```
To run the model directly with LocalAI:
```
local-ai run huggingface://fakezeta/Phi-3.5-mini-instruct-ov-int8/model.yaml
```
## Model Summary
Phi-3.5-mini is a lightweight, state-of-the-art open model built upon datasets used for Phi-3 - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning dense data. The model belongs to the Phi-3 model family and supports 128K token context length. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning, proximal policy optimization, and direct preference optimization to ensure precise instruction adherence and robust safety measures.
🏡 [Phi-3 Portal](https://azure.microsoft.com/en-us/products/phi-3) <br>
📰 [Phi-3 Microsoft Blog](https://aka.ms/phi3.5-techblog) <br>
📖 [Phi-3 Technical Report](https://arxiv.org/abs/2404.14219) <br>
👩🍳 [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) <br>
🖥️ [Try It](https://aka.ms/try-phi3.5mini) <br>
**Phi-3.5**: [[mini-instruct]](https://huggingface.co/microsoft/Phi-3.5-mini-instruct); [[MoE-instruct]](https://huggingface.co/microsoft/Phi-3.5-MoE-instruct) ; [[vision-instruct]](https://huggingface.co/microsoft/Phi-3.5-vision-instruct)
## Intended Uses
### Primary Use Cases
The model is intended for commercial and research use in multiple languages. The model provides uses for general purpose AI systems and applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
### Use Case Considerations
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
***Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.***
## Release Notes
This is an update over the June 2024 instruction-tuned Phi-3 Mini release based on valuable user feedback. The model used additional post-training data leading to substantial gains on multilingual, multi-turn conversation quality, and reasoning capability. We believe most use cases will benefit from this release, but we encourage users to test in their particular AI applications. We appreciate the enthusiastic adoption of the Phi-3 model family, and continue to welcome all feedback from the community.
### Multilingual
The table below highlights multilingual capability of the Phi-3.5 Mini on multilingual MMLU, MEGA, and multilingual MMLU-pro datasets. Overall, we observed that even with just 3.8B active parameters, the model is competitive on multilingual tasks in comparison to other models with a much bigger active parameters.
| Benchmark | Phi-3.5 Mini-Ins | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|----------------------------|------------------|-----------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Multilingual MMLU | 55.4 | 51.08 | 47.4 | 58.9 | 56.2 | 63.8 | 77.2 | 72.9 |
| Multilingual MMLU-Pro | 30.9 | 30.21 | 15.0 | 34.0 | 21.4 | 43.0 | 57.9 | 53.2 |
| MGSM | 47.9 | 41.56 | 31.8 | 63.3 | 56.7 | 75.1 | 75.8 | 81.7 |
| MEGA MLQA | 61.7 | 55.5 | 43.9 | 61.2 | 45.2 | 54.4 | 61.6 | 70.0 |
| MEGA TyDi QA | 62.2 | 55.9 | 54.0 | 63.7 | 54.5 | 65.6 | 63.6 | 81.8 |
| MEGA UDPOS | 46.5 | 48.1 | 57.2 | 58.2 | 54.1 | 56.6 | 62.4 | 66.0 |
| MEGA XCOPA | 63.1 | 62.4 | 58.8 | 10.8 | 21.1 | 31.2 | 95.0 | 90.3 |
| MEGA XStoryCloze | 73.5 | 73.6 | 75.5 | 92.3 | 71.0 | 87.0 | 20.7 | 96.6 |
| **Average** | **55.2** | **52.3** | **47.9** | **55.3** | **47.5** | **59.6** | **64.3** | **76.6** |
The table below shows Multilingual MMLU scores in some of the supported languages. For more multi-lingual benchmarks and details, see [Appendix A](#appendix-a).
| Benchmark | Phi-3.5 Mini-Ins | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|------------------|-----------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Arabic | 44.2 | 35.4 | 33.7 | 45.3 | 49.1 | 56.3 | 73.6 | 67.1 |
| Chinese | 52.6 | 46.9 | 45.9 | 58.2 | 54.4 | 62.7 | 66.7 | 70.8 |
| Dutch | 57.7 | 48.0 | 51.3 | 60.1 | 55.9 | 66.7 | 80.6 | 74.2 |
| French | 61.1 | 61.7 | 53.0 | 63.8 | 62.8 | 67.0 | 82.9 | 75.6 |
| German | 62.4 | 61.3 | 50.1 | 64.5 | 59.9 | 65.7 | 79.5 | 74.3 |
| Italian | 62.8 | 63.1 | 52.5 | 64.1 | 55.9 | 65.7 | 82.6 | 75.9 |
| Russian | 50.4 | 45.3 | 48.9 | 59.0 | 57.4 | 63.2 | 78.7 | 72.6 |
| Spanish | 62.6 | 61.3 | 53.9 | 64.3 | 62.6 | 66.0 | 80.0 | 75.5 |
| Ukrainian | 45.2 | 36.7 | 46.9 | 56.6 | 52.9 | 62.0 | 77.4 | 72.6 |
### Long Context
Phi-3.5-mini supports 128K context length, therefore the model is capable of several long context tasks including long document/meeting summarization, long document QA, long document information retrieval. We see that Phi-3.5-mini is clearly better than Gemma-2 family which only supports 8K context length. Phi-3.5-mini is competitive with other much larger open-weight models such as Llama-3.1-8B-instruct, Mistral-7B-instruct-v0.3, and Mistral-Nemo-12B-instruct-2407.
| Benchmark | Phi-3.5-mini-instruct | Llama-3.1-8B-instruct | Mistral-7B-instruct-v0.3 | Mistral-Nemo-12B-instruct-2407 | Gemini-1.5-Flash | GPT-4o-mini-2024-07-18 (Chat) |
|--|--|--|--|--|--|--|
| GovReport | 25.9 | 25.1 | 26.0 | 25.6 | 27.8 | 24.8 |
| QMSum | 21.3 | 21.6 | 21.3 | 22.1 | 24.0 | 21.7 |
| Qasper | 41.9 | 37.2 | 31.4 | 30.7 | 43.5 | 39.8 |
| SQuALITY | 25.3 | 26.2 | 25.9 | 25.8 | 23.5 | 23.8 |
| SummScreenFD | 16.0 | 17.6 | 17.5 | 18.2 | 16.3 | 17.0 |
| **Average** | **26.1** | **25.5** | **24.4** | **24.5** | **27.0** | **25.4** |
RULER: a retrieval-based benchmark for long context understanding
| Model | 4K | 8K | 16K | 32K | 64K | 128K | Average |
|--|--|--|--|--|--|--|--|
| **Phi-3.5-mini-instruct** | 94.3 | 91.1 | 90.7 | 87.1 | 78.0 | 63.6 | **84.1** |
| **Llama-3.1-8B-instruct** | 95.5 | 93.8 | 91.6 | 87.4 | 84.7 | 77.0 | **88.3** |
| **Mistral-Nemo-12B-instruct-2407** | 87.8 | 87.2 | 87.7 | 69.0 | 46.8 | 19.0 | **66.2** |
RepoQA: a benchmark for long context code understanding
| Model | Python | C++ | Rust | Java | TypeScript | Average |
|--|--|--|--|--|--|--|
| **Phi-3.5-mini-instruct** | 86 | 67 | 73 | 77 | 82 | **77** |
| **Llama-3.1-8B-instruct** | 80 | 65 | 73 | 76 | 63 | **71** |
| **Mistral-7B-instruct-v0.3** | 61 | 57 | 51 | 61 | 80 | **62** |
## Usage
### Requirements
Phi-3 family has been integrated in the `4.43.0` version of `transformers`. The current `transformers` version can be verified with: `pip list | grep transformers`.
Examples of required packages:
```
flash_attn==2.5.8
torch==2.3.1
accelerate==0.31.0
transformers==4.43.0
```
Phi-3.5-mini-instruct is also available in [Azure AI Studio](https://aka.ms/try-phi3.5mini)
### Tokenizer
Phi-3.5-mini-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3.5-mini-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
### Input Formats
Given the nature of the training data, the Phi-3.5-mini-instruct model is best suited for prompts using the chat format as follows:
```
<|system|>
You are a helpful assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
### Loading the model locally
After obtaining the Phi-3.5-mini-instruct model checkpoint, users can use this sample code for inference.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3.5-mini-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
Notes: If you want to use flash attention, call _AutoModelForCausalLM.from_pretrained()_ with _attn_implementation="flash_attention_2"_
## Responsible AI Considerations
Like other language models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: The Phi models are trained primarily on English text and some additional multilingual text. Languages other than English will experience worse performance as well as performance disparities across non-English. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Multilingual performance and safety gaps: We believe it is important to make language models more widely available across different languages, but the Phi 3 models still exhibit challenges common across multilingual releases. As with any deployment of LLMs, developers will be better positioned to test for performance or safety gaps for their linguistic and cultural context and customize the model with additional fine-tuning and appropriate safeguards.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups, cultural contexts, or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
+ Long Conversation: Phi-3 models, like other models, can in some cases generate responses that are repetitive, unhelpful, or inconsistent in very long chat sessions in both English and non-English languages. Developers are encouraged to place appropriate mitigations, like limiting conversation turns to account for the possible conversational drift
Developers should apply responsible AI best practices, including mapping, measuring, and mitigating risks associated with their specific use case and cultural, linguistic context. Phi-3 family of models are general purpose models. As developers plan to deploy these models for specific use cases, they are encouraged to fine-tune the models for their use case and leverage the models as part of broader AI systems with language-specific safeguards in place. Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
**Architecture:** Phi-3.5-mini has 3.8B parameters and is a dense decoder-only Transformer model using the same tokenizer as Phi-3 Mini.<br>
**Inputs:** Text. It is best suited for prompts using chat format.<br>
**Context length:** 128K tokens<br>
**GPUs:** 512 H100-80G<br>
**Training time:** 10 days<br>
**Training data:** 3.4T tokens<br>
**Outputs:** Generated text in response to the input<br>
**Dates:** Trained between June and August 2024<br>
**Status:** This is a static model trained on an offline dataset with cutoff date October 2023 for publicly available data. Future versions of the tuned models may be released as we improve models.<br>
**Supported languages:** Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, Ukrainian<br>
**Release date:** August 2024<br>
### Training Datasets
Our training data includes a wide variety of sources, totaling 3.4 trillion tokens, and is a combination of
1) publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) high quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
We are focusing on the quality of data that could potentially improve the reasoning ability for the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in premier league in a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for the small size models. More details about data can be found in the [Phi-3 Technical Report](https://arxiv.org/pdf/2404.14219).
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/sample_finetune.py).
## Benchmarks
We report the results under completion format for Phi-3.5-mini on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mistral-7B-Instruct-v0.3, Mistral-Nemo-12B-Ins-2407, Llama-3.1-8B-Ins, Gemma-2-9B-Ins, Gemini 1.5 Flash, and GPT-4o-mini-2024-07-18 (Chat).
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k–shot examples is listed per-benchmark. At the high-level overview of the model quality on representative benchmarks:
| Category | Benchmark | Phi-3.5 Mini-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|----------------|--------------------------|------------------|--------------------------|---------------------------|------------------|----------------|------------------|------------------------------|
| Popular aggregated benchmark | Arena Hard | 37 | 18.1 | 39.4 | 25.7 | 42 | 55.2 | 75 |
| | BigBench Hard CoT (0-shot) | 69 | 33.4 | 60.2 | 63.4 | 63.5 | 66.7 | 80.4 |
| | MMLU (5-shot) | 69 | 60.3 | 67.2 | 68.1 | 71.3 | 78.7 | 77.2 |
| | MMLU-Pro (0-shot, CoT) | 47.4 | 18 | 40.7 | 44 | 50.1 | 57.2 | 62.8 |
| Reasoning | ARC Challenge (10-shot) | 84.6 | 77.9 | 84.8 | 83.1 | 89.8 | 92.8 | 93.5 |
| | BoolQ (2-shot) | 78 | 80.5 | 82.5 | 82.8 | 85.7 | 85.8 | 88.7 |
| | GPQA (0-shot, CoT) | 30.4 | 15.6 | 28.6 | 26.3 | 29.2 | 37.5 | 41.1 |
| | HellaSwag (5-shot) | 69.4 | 71.6 | 76.7 | 73.5 | 80.9 | 67.5 | 87.1 |
| | OpenBookQA (10-shot) | 79.2 | 78 | 84.4 | 84.8 | 89.6 | 89 | 90 |
| | PIQA (5-shot) | 81 | 73.4 | 83.5 | 81.2 | 83.7 | 87.5 | 88.7 |
| | Social IQA (5-shot) | 74.7 | 73 | 75.3 | 71.8 | 74.7 | 77.8 | 82.9 |
| | TruthfulQA (MC2) (10-shot) | 64 | 64.7 | 68.1 | 69.2 | 76.6 | 76.6 | 78.2 |
| | WinoGrande (5-shot) | 68.5 | 58.1 | 70.4 | 64.7 | 74 | 74.7 | 76.9 |
| Multilingual | Multilingual MMLU (5-shot) | 55.4 | 47.4 | 58.9 | 56.2 | 63.8 | 77.2 | 72.9 |
| | MGSM (0-shot CoT) | 47.9 | 31.8 | 63.3 | 56.7 | 76.4 | 75.8 | 81.7 |
| Math | GSM8K (8-shot, CoT) | 86.2 | 54.4 | 84.2 | 82.4 | 84.9 | 82.4 | 91.3 |
| | MATH (0-shot, CoT) | 48.5 | 19 | 31.2 | 47.6 | 50.9 | 38 | 70.2 |
| Long context | Qasper | 41.9 | 31.4 | 30.7 | 37.2 | 13.9 | 43.5 | 39.8 |
| | SQuALITY | 24.3 | 25.9 | 25.8 | 26.2 | 0 | 23.5 | 23.8 |
| Code Generation| HumanEval (0-shot) | 62.8 | 35.4 | 63.4 | 66.5 | 61 | 74.4 | 86.6 |
| | MBPP (3-shot) | 69.6 | 50.4 | 68.1 | 69.4 | 69.3 | 77.5 | 84.1 |
| **Average** | | **61.4** | **48.5** | **61.3** | **61.0** | **63.3** | **68.5** | **74.9** |
We take a closer look at different categories across public benchmark datasets at the table below:
| Category | Phi-3.5 Mini-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|----------------------------|------------------|--------------------------|---------------------------|------------------|----------------|------------------|------------------------------|
| Popular aggregated benchmark | 55.6 | 32.5 | 51.9 | 50.3 | 56.7 | 64.5 | 73.9 |
| Reasoning | 70.1 | 65.2 | 72.2 | 70.5 | 75.4 | 77.7 | 80 |
| Language understanding | 62.6 | 62.8 | 67 | 62.9 | 72.8 | 66.6 | 76.8 |
| Robustness | 59.7 | 53.4 | 65.2 | 59.8 | 64.7 | 68.9 | 77.5 |
| Long context | 26.1 | 25.5 | 24.4 | 24.5 | 0 | 27 | 25.4 |
| Math | 67.4 | 36.7 | 57.7 | 65 | 67.9 | 60.2 | 80.8 |
| Code generation | 62 | 43.1 | 56.9 | 65.8 | 58.3 | 66.8 | 69.9 |
| Multilingual | 55.2 | 47.9 | 55.3 | 47.5 | 59.6 | 64.3 | 76.6 |
Overall, the model with only 3.8B-param achieves a similar level of multilingual language understanding and reasoning ability as much larger models.
However, it is still fundamentally limited by its size for certain tasks.
The model simply does not have the capacity to store too much factual knowledge, therefore, users may experience factual incorrectness.
However, we believe such weakness can be resolved by augmenting Phi-3.5 with a search engine, particularly when using the model under RAG settings.
## Safety Evaluation and Red-Teaming
We leveraged various evaluation techniques including red teaming, adversarial conversation simulations, and multilingual safety evaluation benchmark datasets to
evaluate Phi-3.5 models' propensity to produce undesirable outputs across multiple languages and risk categories.
Several approaches were used to compensate for the limitations of one approach alone. Findings across the various evaluation methods indicate that safety
post-training that was done as detailed in the [Phi-3 Safety Post-Training paper](https://arxiv.org/pdf/2407.13833) had a positive impact across multiple languages and risk categories as observed by
refusal rates (refusal to output undesirable outputs) and robustness to jailbreak techniques. Note, however, while comprehensive red team evaluations were conducted
across all models in the prior release of Phi models, red teaming was largely focused on Phi-3.5 MOE across multiple languages and risk categories for this release as
it is the largest and more capable model of the three models. Details on prior red team evaluations across Phi models can be found in the [Phi-3 Safety Post-Training paper](https://arxiv.org/pdf/2407.13833).
For this release, insights from red teaming indicate that the models may refuse to generate undesirable outputs in English, even when the request for undesirable output
is in another language. Models may also be more susceptible to longer multi-turn jailbreak techniques across both English and non-English languages. These findings
highlight the need for industry-wide investment in the development of high-quality safety evaluation datasets across multiple languages, including low resource languages,
and risk areas that account for cultural nuances where those languages are spoken.
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3.5-mini-instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager"
## License
The model is licensed under the [MIT license](./LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
## Appendix A
#### MGSM
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|------------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| German | 69.6 | 65.2 | 42.4 | 74.4 | 68.4 | 76.8 | 81.6 | 82.8 |
| English | 85.2 | 83.2 | 60.0 | 86.0 | 81.2 | 88.8 | 90.8 | 90.8 |
| Spanish | 79.2 | 77.6 | 46.4 | 75.6 | 66.4 | 82.4 | 84.8 | 86.8 |
| French | 71.6 | 72.8 | 47.2 | 70.4 | 66.8 | 74.4 | 77.2 | 81.6 |
| Japanese | 50.0 | 35.2 | 22.8 | 62.4 | 49.2 | 67.6 | 77.6 | 80.4 |
| Russian | 67.2 | 51.6 | 43.2 | 73.6 | 67.2 | 78.4 | 84.8 | 86.4 |
| Thai | 29.6 | 6.4 | 18.4 | 53.2 | 56.0 | 76.8 | 87.6 | 81.6 |
| Chinese | 60.0 | 52.8 | 42.4 | 66.4 | 68.0 | 72.8 | 82.0 | 82.0 |
#### Multilingual MMLU-pro
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|------------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Czech | 24.9 | 26.3 | 14.6 | 30.6 | 23.0 | 40.5 | 59.0 | 40.9 |
| English | 47.7 | 46.2 | 17.7 | 39.8 | 43.1 | 49.0 | 66.1 | 62.7 |
| Finnish | 22.3 | 20.5 | 11.5 | 30.4 | 9.7 | 37.5 | 54.5 | 50.1 |
| Norwegian | 29.9 | 27.8 | 14.4 | 33.2 | 22.2 | 44.4 | 60.7 | 59.1 |
| Polish | 25.7 | 26.4 | 16.3 | 33.6 | 9.2 | 41.7 | 53.9 | 42.8 |
| Portuguese | 38.7 | 37.6 | 15.3 | 36.0 | 29.3 | 43.5 | 54.0 | 56.9 |
| Swedish | 30.7 | 28.1 | 15.5 | 34.3 | 16.9 | 42.6 | 57.7 | 55.5 |
#### MEGA
##### MLQA
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Arabic | 54.3 | 32.7 | 23.5 | 31.4 | 31.5 | 57.4 | 63.8 | 64.0 |
| Chinese | 36.1 | 31.8 | 22.4 | 27.4 | 18.6 | 45.4 | 38.1 | 38.9 |
| English | 80.3 | 78.9 | 68.2 | 75.5 | 67.2 | 82.9 | 69.5 | 82.2 |
| German | 61.8 | 59.1 | 49.0 | 57.8 | 38.9 | 63.8 | 55.9 | 64.1 |
| Spanish | 68.8 | 67.0 | 50.3 | 63.6 | 52.7 | 72.8 | 59.6 | 70.1 |
##### TyDi QA
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Arabic | 69.7 | 54.4 | 52.5 | 49.8 | 33.7 | 81.1 | 78.8 | 84.9 |
| English | 82.0 | 82.0 | 60.5 | 77.3 | 65.1 | 82.4 | 60.9 | 81.8 |
| Finnish | 70.3 | 64.3 | 68.6 | 57.1 | 74.4 | 85.7 | 73.5 | 84.8 |
| Japanese | 65.4 | 56.7 | 45.3 | 54.8 | 34.1 | 74.6 | 59.7 | 73.3 |
| Korean | 74.0 | 60.4 | 54.5 | 54.2 | 54.9 | 83.8 | 60.7 | 82.3 |
| Russian | 63.5 | 62.7 | 52.3 | 55.7 | 27.4 | 69.8 | 60.1 | 72.5 |
| Thai | 64.4 | 49.0 | 51.8 | 43.5 | 48.5 | 81.4 | 71.6 | 78.2 |
##### XCOPA
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| English | 94.6 | 94.6 | 85.6 | 94.4 | 37.6 | 63.8 | 92.0 | 98.2 |
| Italian | 86.8 | 84.8 | 76.8 | 83.2 | 16.2 | 37.2 | 85.6 | 97.6 |
| Turkish | 58.6 | 57.2 | 61.6 | 56.6 | 38.4 | 60.2 | 91.4 | 94.6 |
| null |
Non_BioNLP
|
# OpenVINO IR model with int8 quantization
Model definition for LocalAI:
```yaml
name: phi3.5-mini
backend: transformers
parameters:
model: fakezeta/Phi-3.5-mini-instruct-ov-int8
context_size: 8192
type: OVModelForCausalLM
trust_remote_code: true
template:
use_tokenizer_template: true
```
To run the model directly with LocalAI:
```
local-ai run huggingface://fakezeta/Phi-3.5-mini-instruct-ov-int8/model.yaml
```
## Model Summary
Phi-3.5-mini is a lightweight, state-of-the-art open model built upon datasets used for Phi-3 - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning dense data. The model belongs to the Phi-3 model family and supports 128K token context length. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning, proximal policy optimization, and direct preference optimization to ensure precise instruction adherence and robust safety measures.
🏡 [Phi-3 Portal](https://azure.microsoft.com/en-us/products/phi-3) <br>
📰 [Phi-3 Microsoft Blog](https://aka.ms/phi3.5-techblog) <br>
📖 [Phi-3 Technical Report](https://arxiv.org/abs/2404.14219) <br>
👩🍳 [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) <br>
🖥️ [Try It](https://aka.ms/try-phi3.5mini) <br>
**Phi-3.5**: [[mini-instruct]](https://huggingface.co/microsoft/Phi-3.5-mini-instruct); [[MoE-instruct]](https://huggingface.co/microsoft/Phi-3.5-MoE-instruct) ; [[vision-instruct]](https://huggingface.co/microsoft/Phi-3.5-vision-instruct)
## Intended Uses
### Primary Use Cases
The model is intended for commercial and research use in multiple languages. The model provides uses for general purpose AI systems and applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
### Use Case Considerations
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
***Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.***
## Release Notes
This is an update over the June 2024 instruction-tuned Phi-3 Mini release based on valuable user feedback. The model used additional post-training data leading to substantial gains on multilingual, multi-turn conversation quality, and reasoning capability. We believe most use cases will benefit from this release, but we encourage users to test in their particular AI applications. We appreciate the enthusiastic adoption of the Phi-3 model family, and continue to welcome all feedback from the community.
### Multilingual
The table below highlights multilingual capability of the Phi-3.5 Mini on multilingual MMLU, MEGA, and multilingual MMLU-pro datasets. Overall, we observed that even with just 3.8B active parameters, the model is competitive on multilingual tasks in comparison to other models with a much bigger active parameters.
| Benchmark | Phi-3.5 Mini-Ins | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|----------------------------|------------------|-----------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Multilingual MMLU | 55.4 | 51.08 | 47.4 | 58.9 | 56.2 | 63.8 | 77.2 | 72.9 |
| Multilingual MMLU-Pro | 30.9 | 30.21 | 15.0 | 34.0 | 21.4 | 43.0 | 57.9 | 53.2 |
| MGSM | 47.9 | 41.56 | 31.8 | 63.3 | 56.7 | 75.1 | 75.8 | 81.7 |
| MEGA MLQA | 61.7 | 55.5 | 43.9 | 61.2 | 45.2 | 54.4 | 61.6 | 70.0 |
| MEGA TyDi QA | 62.2 | 55.9 | 54.0 | 63.7 | 54.5 | 65.6 | 63.6 | 81.8 |
| MEGA UDPOS | 46.5 | 48.1 | 57.2 | 58.2 | 54.1 | 56.6 | 62.4 | 66.0 |
| MEGA XCOPA | 63.1 | 62.4 | 58.8 | 10.8 | 21.1 | 31.2 | 95.0 | 90.3 |
| MEGA XStoryCloze | 73.5 | 73.6 | 75.5 | 92.3 | 71.0 | 87.0 | 20.7 | 96.6 |
| **Average** | **55.2** | **52.3** | **47.9** | **55.3** | **47.5** | **59.6** | **64.3** | **76.6** |
The table below shows Multilingual MMLU scores in some of the supported languages. For more multi-lingual benchmarks and details, see [Appendix A](#appendix-a).
| Benchmark | Phi-3.5 Mini-Ins | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|------------------|-----------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Arabic | 44.2 | 35.4 | 33.7 | 45.3 | 49.1 | 56.3 | 73.6 | 67.1 |
| Chinese | 52.6 | 46.9 | 45.9 | 58.2 | 54.4 | 62.7 | 66.7 | 70.8 |
| Dutch | 57.7 | 48.0 | 51.3 | 60.1 | 55.9 | 66.7 | 80.6 | 74.2 |
| French | 61.1 | 61.7 | 53.0 | 63.8 | 62.8 | 67.0 | 82.9 | 75.6 |
| German | 62.4 | 61.3 | 50.1 | 64.5 | 59.9 | 65.7 | 79.5 | 74.3 |
| Italian | 62.8 | 63.1 | 52.5 | 64.1 | 55.9 | 65.7 | 82.6 | 75.9 |
| Russian | 50.4 | 45.3 | 48.9 | 59.0 | 57.4 | 63.2 | 78.7 | 72.6 |
| Spanish | 62.6 | 61.3 | 53.9 | 64.3 | 62.6 | 66.0 | 80.0 | 75.5 |
| Ukrainian | 45.2 | 36.7 | 46.9 | 56.6 | 52.9 | 62.0 | 77.4 | 72.6 |
### Long Context
Phi-3.5-mini supports 128K context length, therefore the model is capable of several long context tasks including long document/meeting summarization, long document QA, long document information retrieval. We see that Phi-3.5-mini is clearly better than Gemma-2 family which only supports 8K context length. Phi-3.5-mini is competitive with other much larger open-weight models such as Llama-3.1-8B-instruct, Mistral-7B-instruct-v0.3, and Mistral-Nemo-12B-instruct-2407.
| Benchmark | Phi-3.5-mini-instruct | Llama-3.1-8B-instruct | Mistral-7B-instruct-v0.3 | Mistral-Nemo-12B-instruct-2407 | Gemini-1.5-Flash | GPT-4o-mini-2024-07-18 (Chat) |
|--|--|--|--|--|--|--|
| GovReport | 25.9 | 25.1 | 26.0 | 25.6 | 27.8 | 24.8 |
| QMSum | 21.3 | 21.6 | 21.3 | 22.1 | 24.0 | 21.7 |
| Qasper | 41.9 | 37.2 | 31.4 | 30.7 | 43.5 | 39.8 |
| SQuALITY | 25.3 | 26.2 | 25.9 | 25.8 | 23.5 | 23.8 |
| SummScreenFD | 16.0 | 17.6 | 17.5 | 18.2 | 16.3 | 17.0 |
| **Average** | **26.1** | **25.5** | **24.4** | **24.5** | **27.0** | **25.4** |
RULER: a retrieval-based benchmark for long context understanding
| Model | 4K | 8K | 16K | 32K | 64K | 128K | Average |
|--|--|--|--|--|--|--|--|
| **Phi-3.5-mini-instruct** | 94.3 | 91.1 | 90.7 | 87.1 | 78.0 | 63.6 | **84.1** |
| **Llama-3.1-8B-instruct** | 95.5 | 93.8 | 91.6 | 87.4 | 84.7 | 77.0 | **88.3** |
| **Mistral-Nemo-12B-instruct-2407** | 87.8 | 87.2 | 87.7 | 69.0 | 46.8 | 19.0 | **66.2** |
RepoQA: a benchmark for long context code understanding
| Model | Python | C++ | Rust | Java | TypeScript | Average |
|--|--|--|--|--|--|--|
| **Phi-3.5-mini-instruct** | 86 | 67 | 73 | 77 | 82 | **77** |
| **Llama-3.1-8B-instruct** | 80 | 65 | 73 | 76 | 63 | **71** |
| **Mistral-7B-instruct-v0.3** | 61 | 57 | 51 | 61 | 80 | **62** |
## Usage
### Requirements
Phi-3 family has been integrated in the `4.43.0` version of `transformers`. The current `transformers` version can be verified with: `pip list | grep transformers`.
Examples of required packages:
```
flash_attn==2.5.8
torch==2.3.1
accelerate==0.31.0
transformers==4.43.0
```
Phi-3.5-mini-instruct is also available in [Azure AI Studio](https://aka.ms/try-phi3.5mini)
### Tokenizer
Phi-3.5-mini-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3.5-mini-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
### Input Formats
Given the nature of the training data, the Phi-3.5-mini-instruct model is best suited for prompts using the chat format as follows:
```
<|system|>
You are a helpful assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
### Loading the model locally
After obtaining the Phi-3.5-mini-instruct model checkpoint, users can use this sample code for inference.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3.5-mini-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
Notes: If you want to use flash attention, call _AutoModelForCausalLM.from_pretrained()_ with _attn_implementation="flash_attention_2"_
## Responsible AI Considerations
Like other language models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: The Phi models are trained primarily on English text and some additional multilingual text. Languages other than English will experience worse performance as well as performance disparities across non-English. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Multilingual performance and safety gaps: We believe it is important to make language models more widely available across different languages, but the Phi 3 models still exhibit challenges common across multilingual releases. As with any deployment of LLMs, developers will be better positioned to test for performance or safety gaps for their linguistic and cultural context and customize the model with additional fine-tuning and appropriate safeguards.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups, cultural contexts, or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
+ Long Conversation: Phi-3 models, like other models, can in some cases generate responses that are repetitive, unhelpful, or inconsistent in very long chat sessions in both English and non-English languages. Developers are encouraged to place appropriate mitigations, like limiting conversation turns to account for the possible conversational drift
Developers should apply responsible AI best practices, including mapping, measuring, and mitigating risks associated with their specific use case and cultural, linguistic context. Phi-3 family of models are general purpose models. As developers plan to deploy these models for specific use cases, they are encouraged to fine-tune the models for their use case and leverage the models as part of broader AI systems with language-specific safeguards in place. Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
**Architecture:** Phi-3.5-mini has 3.8B parameters and is a dense decoder-only Transformer model using the same tokenizer as Phi-3 Mini.<br>
**Inputs:** Text. It is best suited for prompts using chat format.<br>
**Context length:** 128K tokens<br>
**GPUs:** 512 H100-80G<br>
**Training time:** 10 days<br>
**Training data:** 3.4T tokens<br>
**Outputs:** Generated text in response to the input<br>
**Dates:** Trained between June and August 2024<br>
**Status:** This is a static model trained on an offline dataset with cutoff date October 2023 for publicly available data. Future versions of the tuned models may be released as we improve models.<br>
**Supported languages:** Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, Ukrainian<br>
**Release date:** August 2024<br>
### Training Datasets
Our training data includes a wide variety of sources, totaling 3.4 trillion tokens, and is a combination of
1) publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) high quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
We are focusing on the quality of data that could potentially improve the reasoning ability for the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in premier league in a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for the small size models. More details about data can be found in the [Phi-3 Technical Report](https://arxiv.org/pdf/2404.14219).
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/sample_finetune.py).
## Benchmarks
We report the results under completion format for Phi-3.5-mini on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mistral-7B-Instruct-v0.3, Mistral-Nemo-12B-Ins-2407, Llama-3.1-8B-Ins, Gemma-2-9B-Ins, Gemini 1.5 Flash, and GPT-4o-mini-2024-07-18 (Chat).
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k–shot examples is listed per-benchmark. At the high-level overview of the model quality on representative benchmarks:
| Category | Benchmark | Phi-3.5 Mini-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|----------------|--------------------------|------------------|--------------------------|---------------------------|------------------|----------------|------------------|------------------------------|
| Popular aggregated benchmark | Arena Hard | 37 | 18.1 | 39.4 | 25.7 | 42 | 55.2 | 75 |
| | BigBench Hard CoT (0-shot) | 69 | 33.4 | 60.2 | 63.4 | 63.5 | 66.7 | 80.4 |
| | MMLU (5-shot) | 69 | 60.3 | 67.2 | 68.1 | 71.3 | 78.7 | 77.2 |
| | MMLU-Pro (0-shot, CoT) | 47.4 | 18 | 40.7 | 44 | 50.1 | 57.2 | 62.8 |
| Reasoning | ARC Challenge (10-shot) | 84.6 | 77.9 | 84.8 | 83.1 | 89.8 | 92.8 | 93.5 |
| | BoolQ (2-shot) | 78 | 80.5 | 82.5 | 82.8 | 85.7 | 85.8 | 88.7 |
| | GPQA (0-shot, CoT) | 30.4 | 15.6 | 28.6 | 26.3 | 29.2 | 37.5 | 41.1 |
| | HellaSwag (5-shot) | 69.4 | 71.6 | 76.7 | 73.5 | 80.9 | 67.5 | 87.1 |
| | OpenBookQA (10-shot) | 79.2 | 78 | 84.4 | 84.8 | 89.6 | 89 | 90 |
| | PIQA (5-shot) | 81 | 73.4 | 83.5 | 81.2 | 83.7 | 87.5 | 88.7 |
| | Social IQA (5-shot) | 74.7 | 73 | 75.3 | 71.8 | 74.7 | 77.8 | 82.9 |
| | TruthfulQA (MC2) (10-shot) | 64 | 64.7 | 68.1 | 69.2 | 76.6 | 76.6 | 78.2 |
| | WinoGrande (5-shot) | 68.5 | 58.1 | 70.4 | 64.7 | 74 | 74.7 | 76.9 |
| Multilingual | Multilingual MMLU (5-shot) | 55.4 | 47.4 | 58.9 | 56.2 | 63.8 | 77.2 | 72.9 |
| | MGSM (0-shot CoT) | 47.9 | 31.8 | 63.3 | 56.7 | 76.4 | 75.8 | 81.7 |
| Math | GSM8K (8-shot, CoT) | 86.2 | 54.4 | 84.2 | 82.4 | 84.9 | 82.4 | 91.3 |
| | MATH (0-shot, CoT) | 48.5 | 19 | 31.2 | 47.6 | 50.9 | 38 | 70.2 |
| Long context | Qasper | 41.9 | 31.4 | 30.7 | 37.2 | 13.9 | 43.5 | 39.8 |
| | SQuALITY | 24.3 | 25.9 | 25.8 | 26.2 | 0 | 23.5 | 23.8 |
| Code Generation| HumanEval (0-shot) | 62.8 | 35.4 | 63.4 | 66.5 | 61 | 74.4 | 86.6 |
| | MBPP (3-shot) | 69.6 | 50.4 | 68.1 | 69.4 | 69.3 | 77.5 | 84.1 |
| **Average** | | **61.4** | **48.5** | **61.3** | **61.0** | **63.3** | **68.5** | **74.9** |
We take a closer look at different categories across public benchmark datasets at the table below:
| Category | Phi-3.5 Mini-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|----------------------------|------------------|--------------------------|---------------------------|------------------|----------------|------------------|------------------------------|
| Popular aggregated benchmark | 55.6 | 32.5 | 51.9 | 50.3 | 56.7 | 64.5 | 73.9 |
| Reasoning | 70.1 | 65.2 | 72.2 | 70.5 | 75.4 | 77.7 | 80 |
| Language understanding | 62.6 | 62.8 | 67 | 62.9 | 72.8 | 66.6 | 76.8 |
| Robustness | 59.7 | 53.4 | 65.2 | 59.8 | 64.7 | 68.9 | 77.5 |
| Long context | 26.1 | 25.5 | 24.4 | 24.5 | 0 | 27 | 25.4 |
| Math | 67.4 | 36.7 | 57.7 | 65 | 67.9 | 60.2 | 80.8 |
| Code generation | 62 | 43.1 | 56.9 | 65.8 | 58.3 | 66.8 | 69.9 |
| Multilingual | 55.2 | 47.9 | 55.3 | 47.5 | 59.6 | 64.3 | 76.6 |
Overall, the model with only 3.8B-param achieves a similar level of multilingual language understanding and reasoning ability as much larger models.
However, it is still fundamentally limited by its size for certain tasks.
The model simply does not have the capacity to store too much factual knowledge, therefore, users may experience factual incorrectness.
However, we believe such weakness can be resolved by augmenting Phi-3.5 with a search engine, particularly when using the model under RAG settings.
## Safety Evaluation and Red-Teaming
We leveraged various evaluation techniques including red teaming, adversarial conversation simulations, and multilingual safety evaluation benchmark datasets to
evaluate Phi-3.5 models' propensity to produce undesirable outputs across multiple languages and risk categories.
Several approaches were used to compensate for the limitations of one approach alone. Findings across the various evaluation methods indicate that safety
post-training that was done as detailed in the [Phi-3 Safety Post-Training paper](https://arxiv.org/pdf/2407.13833) had a positive impact across multiple languages and risk categories as observed by
refusal rates (refusal to output undesirable outputs) and robustness to jailbreak techniques. Note, however, while comprehensive red team evaluations were conducted
across all models in the prior release of Phi models, red teaming was largely focused on Phi-3.5 MOE across multiple languages and risk categories for this release as
it is the largest and more capable model of the three models. Details on prior red team evaluations across Phi models can be found in the [Phi-3 Safety Post-Training paper](https://arxiv.org/pdf/2407.13833).
For this release, insights from red teaming indicate that the models may refuse to generate undesirable outputs in English, even when the request for undesirable output
is in another language. Models may also be more susceptible to longer multi-turn jailbreak techniques across both English and non-English languages. These findings
highlight the need for industry-wide investment in the development of high-quality safety evaluation datasets across multiple languages, including low resource languages,
and risk areas that account for cultural nuances where those languages are spoken.
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3.5-mini-instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager"
## License
The model is licensed under the [MIT license](./LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
## Appendix A
#### MGSM
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|------------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| German | 69.6 | 65.2 | 42.4 | 74.4 | 68.4 | 76.8 | 81.6 | 82.8 |
| English | 85.2 | 83.2 | 60.0 | 86.0 | 81.2 | 88.8 | 90.8 | 90.8 |
| Spanish | 79.2 | 77.6 | 46.4 | 75.6 | 66.4 | 82.4 | 84.8 | 86.8 |
| French | 71.6 | 72.8 | 47.2 | 70.4 | 66.8 | 74.4 | 77.2 | 81.6 |
| Japanese | 50.0 | 35.2 | 22.8 | 62.4 | 49.2 | 67.6 | 77.6 | 80.4 |
| Russian | 67.2 | 51.6 | 43.2 | 73.6 | 67.2 | 78.4 | 84.8 | 86.4 |
| Thai | 29.6 | 6.4 | 18.4 | 53.2 | 56.0 | 76.8 | 87.6 | 81.6 |
| Chinese | 60.0 | 52.8 | 42.4 | 66.4 | 68.0 | 72.8 | 82.0 | 82.0 |
#### Multilingual MMLU-pro
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|------------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Czech | 24.9 | 26.3 | 14.6 | 30.6 | 23.0 | 40.5 | 59.0 | 40.9 |
| English | 47.7 | 46.2 | 17.7 | 39.8 | 43.1 | 49.0 | 66.1 | 62.7 |
| Finnish | 22.3 | 20.5 | 11.5 | 30.4 | 9.7 | 37.5 | 54.5 | 50.1 |
| Norwegian | 29.9 | 27.8 | 14.4 | 33.2 | 22.2 | 44.4 | 60.7 | 59.1 |
| Polish | 25.7 | 26.4 | 16.3 | 33.6 | 9.2 | 41.7 | 53.9 | 42.8 |
| Portuguese | 38.7 | 37.6 | 15.3 | 36.0 | 29.3 | 43.5 | 54.0 | 56.9 |
| Swedish | 30.7 | 28.1 | 15.5 | 34.3 | 16.9 | 42.6 | 57.7 | 55.5 |
#### MEGA
##### MLQA
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Arabic | 54.3 | 32.7 | 23.5 | 31.4 | 31.5 | 57.4 | 63.8 | 64.0 |
| Chinese | 36.1 | 31.8 | 22.4 | 27.4 | 18.6 | 45.4 | 38.1 | 38.9 |
| English | 80.3 | 78.9 | 68.2 | 75.5 | 67.2 | 82.9 | 69.5 | 82.2 |
| German | 61.8 | 59.1 | 49.0 | 57.8 | 38.9 | 63.8 | 55.9 | 64.1 |
| Spanish | 68.8 | 67.0 | 50.3 | 63.6 | 52.7 | 72.8 | 59.6 | 70.1 |
##### TyDi QA
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Arabic | 69.7 | 54.4 | 52.5 | 49.8 | 33.7 | 81.1 | 78.8 | 84.9 |
| English | 82.0 | 82.0 | 60.5 | 77.3 | 65.1 | 82.4 | 60.9 | 81.8 |
| Finnish | 70.3 | 64.3 | 68.6 | 57.1 | 74.4 | 85.7 | 73.5 | 84.8 |
| Japanese | 65.4 | 56.7 | 45.3 | 54.8 | 34.1 | 74.6 | 59.7 | 73.3 |
| Korean | 74.0 | 60.4 | 54.5 | 54.2 | 54.9 | 83.8 | 60.7 | 82.3 |
| Russian | 63.5 | 62.7 | 52.3 | 55.7 | 27.4 | 69.8 | 60.1 | 72.5 |
| Thai | 64.4 | 49.0 | 51.8 | 43.5 | 48.5 | 81.4 | 71.6 | 78.2 |
##### XCOPA
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| English | 94.6 | 94.6 | 85.6 | 94.4 | 37.6 | 63.8 | 92.0 | 98.2 |
| Italian | 86.8 | 84.8 | 76.8 | 83.2 | 16.2 | 37.2 | 85.6 | 97.6 |
| Turkish | 58.6 | 57.2 | 61.6 | 56.6 | 38.4 | 60.2 | 91.4 | 94.6 |
|
{"language": ["multilingual"], "library_name": "transformers", "license": "mit", "license_link": "https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/LICENSE", "pipeline_tag": "text-generation", "tags": ["nlp", "code"], "widget": [{"messages": [{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}]}]}
|
task
|
[
"SUMMARIZATION"
] | 42,485 |
onnx-community/opus-mt-tc-big-tr-en
|
onnx-community
|
translation
|
[
"transformers.js",
"onnx",
"marian",
"text2text-generation",
"translation",
"base_model:Helsinki-NLP/opus-mt-tc-big-tr-en",
"base_model:quantized:Helsinki-NLP/opus-mt-tc-big-tr-en",
"license:cc-by-4.0",
"region:us"
] | 2024-08-27T21:27:04Z |
2024-10-08T13:54:11+00:00
| 7 | 0 |
---
base_model: Helsinki-NLP/opus-mt-tc-big-tr-en
library_name: transformers.js
license: cc-by-4.0
pipeline_tag: translation
---
https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-tr-en with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
| null |
Non_BioNLP
| ERROR: type should be string, got "\nhttps://huggingface.co/Helsinki-NLP/opus-mt-tc-big-tr-en with ONNX weights to be compatible with Transformers.js.\n\nNote: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`)." |
{"base_model": "Helsinki-NLP/opus-mt-tc-big-tr-en", "library_name": "transformers.js", "license": "cc-by-4.0", "pipeline_tag": "translation"}
|
task
|
[
"TRANSLATION"
] | 42,486 |
aya-se/distilbert-base-uncased-finetuned-emotion
|
aya-se
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-08-09T16:48:43Z |
2022-08-09T17:00:24+00:00
| 15 | 0 |
---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- type: accuracy
value: 0.922
name: Accuracy
- type: f1
value: 0.9225903813139017
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2064
- Accuracy: 0.922
- F1: 0.9226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.2322 | 0.916 | 0.9164 |
| 0.2717 | 2.0 | 250 | 0.2064 | 0.922 | 0.9226 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 1.2.1
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2064
- Accuracy: 0.922
- F1: 0.9226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.2322 | 0.916 | 0.9164 |
| 0.2717 | 2.0 | 250 | 0.2064 | 0.922 | 0.9226 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 1.2.1
- Tokenizers 0.12.1
|
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.922, "name": "Accuracy"}, {"type": "f1", "value": 0.9225903813139017, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,487 |
Sharka/CIVQA_LayoutLMv3_EasyOCR
|
Sharka
|
document-question-answering
|
[
"transformers",
"pytorch",
"layoutlmv3",
"document-question-answering",
"Document Question Answering",
"cs",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2023-12-18T00:12:43Z |
2024-01-27T11:51:54+00:00
| 34 | 0 |
---
language:
- cs
license: mit
tags:
- Document Question Answering
---
# LayoutLMv3 Model Fine-tuned with CIVQA (EasyOCR) dataset
This is a fine-tuned version of the [LayoutLMv3 model](https://huggingface.co/microsoft/layoutlmv3-base), which was trained on Czech Invoice Visual Question Answering (CIVQA) datasets containing invoices in the Czech language.
This model enables Document Visual Question Answering on Czech invoices.
All invoices used in this dataset were obtained from public sources. Over these invoices, we were focusing on 15 different entities, which are crucial for processing the invoices.
- Invoice number
- Variable symbol
- Specific symbol
- Constant symbol
- Bank code
- Account number
- ICO
- Total amount
- Invoice date
- Due date
- Name of supplier
- IBAN
- DIC
- QR code
- Supplier's address
You can find more information about this model in this [paper](https://nlp.fi.muni.cz/raslan/raslan23.pdf#page=31).
| null |
Non_BioNLP
|
# LayoutLMv3 Model Fine-tuned with CIVQA (EasyOCR) dataset
This is a fine-tuned version of the [LayoutLMv3 model](https://huggingface.co/microsoft/layoutlmv3-base), which was trained on Czech Invoice Visual Question Answering (CIVQA) datasets containing invoices in the Czech language.
This model enables Document Visual Question Answering on Czech invoices.
All invoices used in this dataset were obtained from public sources. Over these invoices, we were focusing on 15 different entities, which are crucial for processing the invoices.
- Invoice number
- Variable symbol
- Specific symbol
- Constant symbol
- Bank code
- Account number
- ICO
- Total amount
- Invoice date
- Due date
- Name of supplier
- IBAN
- DIC
- QR code
- Supplier's address
You can find more information about this model in this [paper](https://nlp.fi.muni.cz/raslan/raslan23.pdf#page=31).
|
{"language": ["cs"], "license": "mit", "tags": ["Document Question Answering"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,488 |
ric9176/cjo-ft-v0
|
ric9176
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:154",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:Snowflake/snowflake-arctic-embed-l",
"base_model:finetune:Snowflake/snowflake-arctic-embed-l",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-02-24T01:44:22Z |
2025-02-24T01:45:11+00:00
| 202 | 0 |
---
base_model: Snowflake/snowflake-arctic-embed-l
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:154
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Who will be introducing the first and second Joker movies at the
festival?
sentences:
- '13 Apr 2025Photo: Marshmallow Laser FeastSoil – it’s not something you really
think about, unless you’re doing the gardening. But this new exhibition at Somerset
House will change all that, shining a light on its important role in our world,
including the part it plays in our planet’s future. Top artists, writers and scientists
from across the globe are all involved in the thought-provoking exploration, which
aims to stop you thinking of soil as mere dirt and start considering it as something
far more powerful instead.Read moreBuy ticket24. Enjoy stunning views of the River
Thames with three courses at Sea ContainersNiall Clutton'
- favourite movies – the soundtracks. London Soundtrack Festival puts the scores
front and centre in March 2025, with a series of screenings, talks and performances
celebrating the musicians who make Hollywood sound so exciting, tense and emotional.
Highlights include Hildur Guðnadóttir introducing the first and second Joker movies
and, later in the programme, holding her own concert, David Cronenberg and Howard
Shore in conversation, screenings of Charlie Chaplin’s Modern Times, The Silence
of the Lambs and Eighth Grade with live scores, a day-long celebration of video
game music at The Roundhouse ‘Great Movie Songs with Anne Dudley & Friends’ featuring
guest appearances from the likes of the Pet Shop Boys’ Neil Tennant and Jake Shears
of
- Peter Walker Sculptor and David Harper ComposerSt Paul’s is about to get lit. In
February, the cathedral will be transformed via a stunning immersive light and
sound show. ‘Luminous’ by art collective Luxmuralis will animate the interior
of the building with illuminations and soundscapes inspired by its history, collections
and archives. Previously, Luxmuralis has created shows at Westminster Abbey, Durham
Cathedral and Oxford University. The company was also behind the ‘Poppy Fields’
display at the Tower of London in October.
- source_sentence: What is the significance of Haddadi in the given context?
sentences:
- It’s been almost a decade since Red Bull Culture Clash last took place in London,
but finally, it’s making its return in 2025, The epic music battle, inspired by
Jamaican sound clash culture, will see four crews armed with their finest dubplates
go head-to-head, delivering the best of the electronic, UK rap, Afro, and Caribbean
music scenes. Only one can be crowned the winner, though, and take home the Red
Bull Culture Clash trophy, with the victor. The likes of Boy Better Know, A$AP
Mob and Rebel Sound have previously competed at the legendary competition, as
well as special guests like J Hus, Stormzy, and Ice Kid, so crowds can expect
some pretty special things from its return, which takes place at Drumsheds in
March. Read moreBuy
- Haddadi
- The Irish really know how to celebrate, so when it comes to St Patrick’s Day in
London, the city’s Irish community has no problem showing us how it’s done. A
day to celebrate the patron saint of Ireland, the occasion is always one big welcoming
bash. Expect lots of dancing, hearty traditional dishes, a huge parade and as
many pints as you can handle. The Mayor of London’s annual St Patrick’s Day Festival
celebration will take place on Sunday March 16 – a day ahead of the official holiday
– and, as usual, thousands of revellers are expected to watch the parade wend
its way through central London, while there’ll also be plenty more St Patrick’s
Day parties and events to check out around the city. We’ll be rounding up the
best of them for you
- source_sentence: How does Renée Zellweger's portrayal of Bridget Jones evolve in
"Mad About the Boy" compared to her earlier performances?
sentences:
- "From St Paddy’s to Mothering Sunday, Pancake Day to International Women’s Day, the\
\ third month of the year packs in a whole host of big celebrations. \nAnd it’s\
\ also an especially great month for culture vultures. There are a host of film\
\ festivals happening around the city, from BFI Flare and the inaugural London\
\ Soundtrack Festival to Kinoteka, Cinema Made in Italy and the Banff Mountain\
\ Film Festival. \nAnd there’s also Deptford Literature Festival, the Young Barbican\
\ Takeover Festival, music conference series AVA London and the Other Art Fair. \n\
Find out about all of these, and much more, in our roundup of the best things\
\ to do in London over the month."
- ‘Fourquels’ are usually where film franchises start to flirt with rock bottom,
so it’s a joy to report that Mad About the Boy is comfortably the best Bridget
Jones outing since Bridget Jones’s Diary. For Renée Zellweger’s still klutzy but
now wiser Bridge, living in cosy Hampstead, the singleton Borough era is a distant
memory. Ciggies and Chardonnay have been dispensed with replaced with a big dose
of lingering grief for lawyer Mark Darcy (Colin Firth). It says everything for
the script (co-written by Helen Fielding, Dan Mazer and Abi Morgan) that even
Daniel Cleaver, now entering his own Jurassic era and a bit sad about it, gets
an affecting arc here. The plot will surprise no one, but it barely matters –
this is Bridget’s journey of
- The Six Nations rugby tournament is back for 2025, taking over boozers, beer gardens
and outdoor screens across London most weekends up until Saturday March 15. And
you could just watch on your telly at home. But as the annual competition reaches
its final stages, you might prefer to catch every scrimmage, try and conversion
in a lively atmosphere with a nice freshly-poured Guinness in hand. So head to
one of the rugby pubs, bars, beer halls, markets and social clubs listed here,
where you’ll find free-flowing pints, special guest appearances and countless
renditions of ‘Swing Low, Sweet Chariot’.Read moreAdvertising11. Celebrate the
matriarchs in your life on Mother’s Day in LondonThings to doMums deserve high
praise all year round,
- source_sentence: Who is mentioned in relation to getting Guinnesses for the event?
sentences:
- 'you agree to our Terms of Use and Privacy Policy and consent to receive emails
from Time Out about news, events, offers and partner promotions.SubscribeSearchNewsThings
to DoFood & DrinkArtTheatreTravelHalf-TermOffersSeparatorKidsAttractionsMuseumsFilmMusicNightlifeHotelsLondonLondonNew
YorkParisChicagoLos AngelesLisbonHong KongSydneyMelbournePortoSingaporeBarcelonaMadridMontréalBostonMiamiWorldwideCloseNewsThings
to DoFood & DrinkArtTheatreTravelHalf-TermOffersMoreKidsAttractionsMuseumsFilmMusicNightlifeHotelsLondonLondonNew
YorkParisChicagoLos AngelesLisbonHong KongSydneyMelbournePortoSingaporeBarcelonaMadridMontréalBostonMiamiWorldwideSubscribeOffers
EnglishEnglishEspañolinstagramtiktokfacebooktwitteryoutubePhotograph: Steve Beech
/'
- Haddadi
- 'Shields returning.Read moreBuy ticket2. Get the Guinnesses in for St Patrick’s
Day in LondonThings to doPhotograph: Sandor Szmutko'
- source_sentence: What platforms are mentioned in the context for social media engagement?
sentences:
- out for your first newsletter in your inbox soon!instagramtiktokfacebooktwitteryoutubeAbout
usPress officeInvestor relationsOur awardsWork for Time OutEditorial guidelinesPrivacy
noticeDo not sell my informationCookie policyAccessibility statementTerms of useModern
slavery statementManage cookiesContact usGet ListedClaim your listingTime Out
Offers FAQAdvertisingTime Out MarketTime Out productsTime Out OffersTime Out WorldwideMoviesRestaurantsSite
Map© 2025 Time Out England Limited and affiliated companies owned by Time Out
Group Plc. All rights reserved. Time Out is a registered trademark of Time Out
Digital Limited.
- 'You’ve probably heard all about Versailles’ dazzling Hall of Mirrors and its
gorgeous, well-manicured gardens – maybe you’ve even seen them IRL. But do you
know about the role the French royal court played in not just spreading scientific
knowledge, but making it fashionable, too? The Science Museum’s latest exhibition,
‘Versailles: Science And Splendour’, will uncover that lesser-talked-about side
of the palace’s history, diving into the royal family’s relationship with science,
women’s impact on medicine, philosophy and botany at the royal court, and showcasing
more than 100 items that reinforce those stories – many of which have never been
displayed in the UK before.'
- 'Steve Beech / ShutterstockPhotograph: Steve Beech / ShutterstockLondon events
in March 2025Our guide to the best events, festivals, workshops, exhibitions and
things to do throughout March 2025 in LondonWednesday 12 February 2025ShareCopy
LinkFacebookTwitterPinterestEmailWhatsAppWritten by Rosie HewitsonThings to Do
Editor, LondonAdvertisingThe days are getting gradually lighter, the snowdrops
and crocuses have arrived in London’s park, and London’s cultural scene has burst
into life after a mid-winter lull. It can only mean one thing; March is right
around the corner.'
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.8846153846153846
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8846153846153846
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.33333333333333337
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8846153846153846
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9574149715659375
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9423076923076923
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9423076923076923
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("ric9176/cjo-ft-v0")
# Run inference
sentences = [
'What platforms are mentioned in the context for social media engagement?',
'out for your first newsletter in your inbox soon!instagramtiktokfacebooktwitteryoutubeAbout usPress officeInvestor relationsOur awardsWork for Time OutEditorial guidelinesPrivacy noticeDo not sell my informationCookie policyAccessibility statementTerms of useModern slavery statementManage cookiesContact usGet ListedClaim your listingTime Out Offers FAQAdvertisingTime Out MarketTime Out productsTime Out OffersTime Out WorldwideMoviesRestaurantsSite Map© 2025 Time Out England Limited and affiliated companies owned by Time Out Group Plc. All rights reserved. Time Out is a registered trademark of Time Out Digital Limited.',
'Steve Beech / ShutterstockPhotograph: Steve Beech / ShutterstockLondon events in March 2025Our guide to the best events, festivals, workshops, exhibitions and things to do throughout March 2025 in LondonWednesday 12 February 2025ShareCopy LinkFacebookTwitterPinterestEmailWhatsAppWritten by Rosie HewitsonThings to Do Editor, LondonAdvertisingThe days are getting gradually lighter, the snowdrops and crocuses have arrived in London’s park, and London’s cultural scene has burst into life after a mid-winter lull. It can only mean one thing; March is right around the corner.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8846 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.8846 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.8846 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.9574** |
| cosine_mrr@10 | 0.9423 |
| cosine_map@100 | 0.9423 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 154 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 154 samples:
| | sentence_0 | sentence_1 |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 18.04 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 129.57 tokens</li><li>max: 226 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-----------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What types of events and activities are highlighted for the weekend in London?</code> | <code>30 Wonderful Things To Do This Weekend in London – weekend events and activities in LondonGo to the contentGo to the footerNo thanksSubscribe🙌Awesome, you're subscribed!Thanks for subscribing! Look out for your first newsletter in your inbox soon!Get us in your inboxSign up to our newsletter for the latest and greatest from your city and beyondEnter email addressDéjà vu! We already have this email. Try another?By entering your email address you agree to our Terms of Use and Privacy Policy and consent to receive emails from Time Out about news, events, offers and partner promotions.No thanks Awesome, you're subscribed!Thanks for subscribing! Look out for your first newsletter in your inbox soon!The best of London for free.Sign up for</code> |
| <code>How can individuals stay updated on the latest happenings in London according to the context?</code> | <code>30 Wonderful Things To Do This Weekend in London – weekend events and activities in LondonGo to the contentGo to the footerNo thanksSubscribe🙌Awesome, you're subscribed!Thanks for subscribing! Look out for your first newsletter in your inbox soon!Get us in your inboxSign up to our newsletter for the latest and greatest from your city and beyondEnter email addressDéjà vu! We already have this email. Try another?By entering your email address you agree to our Terms of Use and Privacy Policy and consent to receive emails from Time Out about news, events, offers and partner promotions.No thanks Awesome, you're subscribed!Thanks for subscribing! Look out for your first newsletter in your inbox soon!The best of London for free.Sign up for</code> |
| <code>What benefits do subscribers receive by signing up for the email newsletter?</code> | <code>free.Sign up for our email to enjoy London without spending a thing (as well as some options when you’re feeling flush).Enter email addressDéjà vu! We already have this email. Try another?No thanksBy entering your email address you agree to our Terms of Use and Privacy Policy and consent to receive emails from Time Out about news, events, offers and partner promotions.No thanks Awesome, you're subscribed!Thanks for subscribing! Look out for your first newsletter in your inbox soon!Love the mag?Our newsletter hand-delivers the best bits to your inbox. Sign up to unlock our digital magazines and also receive the latest news, events, offers and partner promotions.Enter email addressDéjà vu! We already have this email. Try another?No</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0 | 16 | 0.9213 |
| 2.0 | 32 | 0.9355 |
| 3.0 | 48 | 0.9290 |
| 3.125 | 50 | 0.9432 |
| 4.0 | 64 | 0.9574 |
| 5.0 | 80 | 0.9574 |
| 6.0 | 96 | 0.9574 |
| 6.25 | 100 | 0.9574 |
| 7.0 | 112 | 0.9574 |
| 8.0 | 128 | 0.9574 |
| 9.0 | 144 | 0.9574 |
| 9.375 | 150 | 0.9574 |
| 10.0 | 160 | 0.9574 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("ric9176/cjo-ft-v0")
# Run inference
sentences = [
'What platforms are mentioned in the context for social media engagement?',
'out for your first newsletter in your inbox soon!instagramtiktokfacebooktwitteryoutubeAbout usPress officeInvestor relationsOur awardsWork for Time OutEditorial guidelinesPrivacy noticeDo not sell my informationCookie policyAccessibility statementTerms of useModern slavery statementManage cookiesContact usGet ListedClaim your listingTime Out Offers FAQAdvertisingTime Out MarketTime Out productsTime Out OffersTime Out WorldwideMoviesRestaurantsSite Map© 2025 Time Out England Limited and affiliated companies owned by Time Out Group Plc. All rights reserved. Time Out is a registered trademark of Time Out Digital Limited.',
'Steve Beech / ShutterstockPhotograph: Steve Beech / ShutterstockLondon events in March 2025Our guide to the best events, festivals, workshops, exhibitions and things to do throughout March 2025 in LondonWednesday 12 February 2025ShareCopy LinkFacebookTwitterPinterestEmailWhatsAppWritten by Rosie HewitsonThings to Do Editor, LondonAdvertisingThe days are getting gradually lighter, the snowdrops and crocuses have arrived in London’s park, and London’s cultural scene has burst into life after a mid-winter lull. It can only mean one thing; March is right around the corner.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8846 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.8846 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.8846 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.9574** |
| cosine_mrr@10 | 0.9423 |
| cosine_map@100 | 0.9423 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 154 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 154 samples:
| | sentence_0 | sentence_1 |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 18.04 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 129.57 tokens</li><li>max: 226 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-----------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What types of events and activities are highlighted for the weekend in London?</code> | <code>30 Wonderful Things To Do This Weekend in London – weekend events and activities in LondonGo to the contentGo to the footerNo thanksSubscribe🙌Awesome, you're subscribed!Thanks for subscribing! Look out for your first newsletter in your inbox soon!Get us in your inboxSign up to our newsletter for the latest and greatest from your city and beyondEnter email addressDéjà vu! We already have this email. Try another?By entering your email address you agree to our Terms of Use and Privacy Policy and consent to receive emails from Time Out about news, events, offers and partner promotions.No thanks Awesome, you're subscribed!Thanks for subscribing! Look out for your first newsletter in your inbox soon!The best of London for free.Sign up for</code> |
| <code>How can individuals stay updated on the latest happenings in London according to the context?</code> | <code>30 Wonderful Things To Do This Weekend in London – weekend events and activities in LondonGo to the contentGo to the footerNo thanksSubscribe🙌Awesome, you're subscribed!Thanks for subscribing! Look out for your first newsletter in your inbox soon!Get us in your inboxSign up to our newsletter for the latest and greatest from your city and beyondEnter email addressDéjà vu! We already have this email. Try another?By entering your email address you agree to our Terms of Use and Privacy Policy and consent to receive emails from Time Out about news, events, offers and partner promotions.No thanks Awesome, you're subscribed!Thanks for subscribing! Look out for your first newsletter in your inbox soon!The best of London for free.Sign up for</code> |
| <code>What benefits do subscribers receive by signing up for the email newsletter?</code> | <code>free.Sign up for our email to enjoy London without spending a thing (as well as some options when you’re feeling flush).Enter email addressDéjà vu! We already have this email. Try another?No thanksBy entering your email address you agree to our Terms of Use and Privacy Policy and consent to receive emails from Time Out about news, events, offers and partner promotions.No thanks Awesome, you're subscribed!Thanks for subscribing! Look out for your first newsletter in your inbox soon!Love the mag?Our newsletter hand-delivers the best bits to your inbox. Sign up to unlock our digital magazines and also receive the latest news, events, offers and partner promotions.Enter email addressDéjà vu! We already have this email. Try another?No</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0 | 16 | 0.9213 |
| 2.0 | 32 | 0.9355 |
| 3.0 | 48 | 0.9290 |
| 3.125 | 50 | 0.9432 |
| 4.0 | 64 | 0.9574 |
| 5.0 | 80 | 0.9574 |
| 6.0 | 96 | 0.9574 |
| 6.25 | 100 | 0.9574 |
| 7.0 | 112 | 0.9574 |
| 8.0 | 128 | 0.9574 |
| 9.0 | 144 | 0.9574 |
| 9.375 | 150 | 0.9574 |
| 10.0 | 160 | 0.9574 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "Snowflake/snowflake-arctic-embed-l", "library_name": "sentence-transformers", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:154", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "Who will be introducing the first and second Joker movies at the festival?", "sentences": ["13 Apr 2025Photo: Marshmallow Laser FeastSoil – it’s not something you really think about, unless you’re doing the gardening. But this new exhibition at Somerset House will change all that, shining a light on its important role in our world, including the part it plays in our planet’s future. Top artists, writers and scientists from across the globe are all involved in the thought-provoking exploration, which aims to stop you thinking of soil as mere dirt and start considering it as something far more powerful instead.Read moreBuy ticket24. Enjoy stunning views of the River Thames with three courses at Sea ContainersNiall Clutton", "favourite movies – the soundtracks. London Soundtrack Festival puts the scores front and centre in March 2025, with a series of screenings, talks and performances celebrating the musicians who make Hollywood sound so exciting, tense and emotional. Highlights include Hildur Guðnadóttir introducing the first and second Joker movies and, later in the programme, holding her own concert, David Cronenberg and Howard Shore in conversation, screenings of Charlie Chaplin’s Modern Times, The Silence of the Lambs and Eighth Grade with live scores, a day-long celebration of video game music at The Roundhouse ‘Great Movie Songs with Anne Dudley & Friends’ featuring guest appearances from the likes of the Pet Shop Boys’ Neil Tennant and Jake Shears of", "Peter Walker Sculptor and David Harper ComposerSt Paul’s is about to get lit. In February, the cathedral will be transformed via a stunning immersive light and sound show. ‘Luminous’ by art collective Luxmuralis will animate the interior of the building with illuminations and soundscapes inspired by its history, collections and archives. Previously, Luxmuralis has created shows at Westminster Abbey, Durham Cathedral and Oxford University. The company was also behind the ‘Poppy Fields’ display at the Tower of London in October."]}, {"source_sentence": "What is the significance of Haddadi in the given context?", "sentences": ["It’s been almost a decade since Red Bull Culture Clash last took place in London, but finally, it’s making its return in 2025, The epic music battle, inspired by Jamaican sound clash culture, will see four crews armed with their finest dubplates go head-to-head, delivering the best of the electronic, UK rap, Afro, and Caribbean music scenes. Only one can be crowned the winner, though, and take home the Red Bull Culture Clash trophy, with the victor. The likes of Boy Better Know, A$AP Mob and Rebel Sound have previously competed at the legendary competition, as well as special guests like J Hus, Stormzy, and Ice Kid, so crowds can expect some pretty special things from its return, which takes place at Drumsheds in March. Read moreBuy", "Haddadi", "The Irish really know how to celebrate, so when it comes to St Patrick’s Day in London, the city’s Irish community has no problem showing us how it’s done. A day to celebrate the patron saint of Ireland, the occasion is always one big welcoming bash. Expect lots of dancing, hearty traditional dishes, a huge parade and as many pints as you can handle. The Mayor of London’s annual St Patrick’s Day Festival celebration will take place on Sunday March 16 – a day ahead of the official holiday – and, as usual, thousands of revellers are expected to watch the parade wend its way through central London, while there’ll also be plenty more St Patrick’s Day parties and events to check out around the city. We’ll be rounding up the best of them for you"]}, {"source_sentence": "How does Renée Zellweger's portrayal of Bridget Jones evolve in \"Mad About the Boy\" compared to her earlier performances?", "sentences": ["From St Paddy’s to Mothering Sunday, Pancake Day to International Women’s Day, the third month of the year packs in a whole host of big celebrations. \nAnd it’s also an especially great month for culture vultures. There are a host of film festivals happening around the city, from BFI Flare and the inaugural London Soundtrack Festival to Kinoteka, Cinema Made in Italy and the Banff Mountain Film Festival. \nAnd there’s also Deptford Literature Festival, the Young Barbican Takeover Festival, music conference series AVA London and the Other Art Fair. \nFind out about all of these, and much more, in our roundup of the best things to do in London over the month.", "‘Fourquels’ are usually where film franchises start to flirt with rock bottom, so it’s a joy to report that Mad About the Boy is comfortably the best Bridget Jones outing since Bridget Jones’s Diary. For Renée Zellweger’s still klutzy but now wiser Bridge, living in cosy Hampstead, the singleton Borough era is a distant memory. Ciggies and Chardonnay have been dispensed with replaced with a big dose of lingering grief for lawyer Mark Darcy (Colin Firth). It says everything for the script (co-written by Helen Fielding, Dan Mazer and Abi Morgan) that even Daniel Cleaver, now entering his own Jurassic era and a bit sad about it, gets an affecting arc here. The plot will surprise no one, but it barely matters – this is Bridget’s journey of", "The Six Nations rugby tournament is back for 2025, taking over boozers, beer gardens and outdoor screens across London most weekends up until Saturday March 15. And you could just watch on your telly at home. But as the annual competition reaches its final stages, you might prefer to catch every scrimmage, try and conversion in a lively atmosphere with a nice freshly-poured Guinness in hand. So head to one of the rugby pubs, bars, beer halls, markets and social clubs listed here, where you’ll find free-flowing pints, special guest appearances and countless renditions of ‘Swing Low, Sweet Chariot’.Read moreAdvertising11. Celebrate the matriarchs in your life on Mother’s Day in LondonThings to doMums deserve high praise all year round,"]}, {"source_sentence": "Who is mentioned in relation to getting Guinnesses for the event?", "sentences": ["you agree to our Terms of Use and Privacy Policy and consent to receive emails from Time Out about news, events, offers and partner promotions.SubscribeSearchNewsThings to DoFood & DrinkArtTheatreTravelHalf-TermOffersSeparatorKidsAttractionsMuseumsFilmMusicNightlifeHotelsLondonLondonNew YorkParisChicagoLos AngelesLisbonHong KongSydneyMelbournePortoSingaporeBarcelonaMadridMontréalBostonMiamiWorldwideCloseNewsThings to DoFood & DrinkArtTheatreTravelHalf-TermOffersMoreKidsAttractionsMuseumsFilmMusicNightlifeHotelsLondonLondonNew YorkParisChicagoLos AngelesLisbonHong KongSydneyMelbournePortoSingaporeBarcelonaMadridMontréalBostonMiamiWorldwideSubscribeOffers EnglishEnglishEspañolinstagramtiktokfacebooktwitteryoutubePhotograph: Steve Beech /", "Haddadi", "Shields returning.Read moreBuy ticket2. Get the Guinnesses in for St Patrick’s Day in LondonThings to doPhotograph: Sandor Szmutko"]}, {"source_sentence": "What platforms are mentioned in the context for social media engagement?", "sentences": ["out for your first newsletter in your inbox soon!instagramtiktokfacebooktwitteryoutubeAbout usPress officeInvestor relationsOur awardsWork for Time OutEditorial guidelinesPrivacy noticeDo not sell my informationCookie policyAccessibility statementTerms of useModern slavery statementManage cookiesContact usGet ListedClaim your listingTime Out Offers FAQAdvertisingTime Out MarketTime Out productsTime Out OffersTime Out WorldwideMoviesRestaurantsSite Map© 2025 Time Out England Limited and affiliated companies owned by Time Out Group Plc. All rights reserved. Time Out is a registered trademark of Time Out Digital Limited.", "You’ve probably heard all about Versailles’ dazzling Hall of Mirrors and its gorgeous, well-manicured gardens – maybe you’ve even seen them IRL. But do you know about the role the French royal court played in not just spreading scientific knowledge, but making it fashionable, too? The Science Museum’s latest exhibition, ‘Versailles: Science And Splendour’, will uncover that lesser-talked-about side of the palace’s history, diving into the royal family’s relationship with science, women’s impact on medicine, philosophy and botany at the royal court, and showcasing more than 100 items that reinforce those stories – many of which have never been displayed in the UK before.", "Steve Beech / ShutterstockPhotograph: Steve Beech / ShutterstockLondon events in March 2025Our guide to the best events, festivals, workshops, exhibitions and things to do throughout March 2025 in LondonWednesday 12 February 2025ShareCopy LinkFacebookTwitterPinterestEmailWhatsAppWritten by Rosie HewitsonThings to Do Editor, LondonAdvertisingThe days are getting gradually lighter, the snowdrops and crocuses have arrived in London’s park, and London’s cultural scene has burst into life after a mid-winter lull. It can only mean one thing; March is right around the corner."]}], "model-index": [{"name": "SentenceTransformer based on Snowflake/snowflake-arctic-embed-l", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "Unknown", "type": "unknown"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.8846153846153846, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 1.0, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 1.0, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 1.0, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.8846153846153846, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.33333333333333337, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.20000000000000004, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.10000000000000002, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.8846153846153846, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 1.0, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 1.0, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 1.0, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.9574149715659375, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.9423076923076923, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.9423076923076923, "name": "Cosine Map@100"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,489 |
next-tat/tat-llm-70b-fft
|
next-tat
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2401.13223",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-02-21T03:31:45Z |
2024-02-23T12:39:40+00:00
| 4 | 0 |
---
language:
- en
license: llama2
---
# TAT-LLM: A Specialized Language Model for Discrete Reasoning over Tabular and Textual Data
Paper: https://arxiv.org/abs/2401.13223
Code: https://github.com/fengbinzhu/TAT-LLM
## Introduction
We present TAT-LLM, a specialized language model crafted through the innovative Step-wise Pipeline approach, focusing on the nuanced realm of tabular and textual question answering (QA). This model is the fruit of rigorously fine-tuning the LLaMA 2 architecture with a novel dataset, autonomously generated from expertly annotated resources. TAT-LLM stands at the intersection of tabular comprehension and textual analysis, engineered to excel by embodying three fundamental phases: Extraction, Reasoning, and Execution. Our empirical findings illuminate TAT-LLM's remarkable capability to eclipse traditional benchmarks, surmounting even the most advanced models and colossal language models such as GPT-4 across a suite of demanding financial QA tasks like FinQA, TAT-QA, and TAT-DQA. This endeavor not only sets a new standard for task-specific language models but also paves the way for future explorations in optimizing smaller models for highly specialized functions.
| Model | Size | FINQA | TATQA | TATDQA |
| --- | --- | --- | --- | --- |
| GPT-3.5-Turbo | - | 58.00 | 59.47 | 52.74 |
| GPT-4 | - | 63.91 | 71.92 | 64.46 |
| [TAT-LLM-7B-LORA](https://huggingface.co/next-tat/tat-llm-7b-lora) | 7B | 65.13 | 76.49 | 71.38 |
| [TAT-LLM-7B-FFT](https://huggingface.co/next-tat/tat-llm-7b-fft) | 7B | 69.75 | 76.91 | 72.64 |
| [TAT-LLM-13B-LORA](https://huggingface.co/next-tat/tat-llm-13b-lora) | 13B | 71.93 | 77.51 | 72.22 |
| [TAT-LLM-13B-FFT](https://huggingface.co/next-tat/tat-llm-13b-fft) | 13B | 72.97 | 78.41 | 73.18 |
| [TAT-LLM-70B-LORA](https://huggingface.co/next-tat/tat-llm-70b-lora) | 70B | **76.81** | 81.42 | 76.55 |
| [TAT-LLM-70B-FFT](https://huggingface.co/next-tat/tat-llm-70b-fft) | 70B | 76.11 | **82.20** | **76.97** |
## Training
We train our TAT-LLM model in various sizes, including 7B, 13B, and 70B, using different methods such as parameter-efficient fine-tuning and full-parameter fine-tuning of LLaMA 2 on a combination of financial data from the FinQA, TAT-QA, and TAT-DQA training sets([🤗HuggingFace Repo](https://huggingface.co/datasets/next-tat/tat-llm-instructions)). To refine accuracy, we introduce an External Executor, enhancing the model by processing intermediate outputs to derive conclusive answers. Please refer to the [paper](https://arxiv.org/abs/2401.13223) for more details.
## Inference & Evaluation
Please refer to code [here](https://github.com/fengbinzhu/TAT-LLM)
## Citation
If you find this model helpful, please consider citing our paper:
```
@misc{zhu2024tatllm,
title={TAT-LLM: A Specialized Language Model for Discrete Reasoning over Tabular and Textual Data},
author={Fengbin Zhu and Ziyang Liu and Fuli Feng and Chao Wang and Moxin Li and Tat-Seng Chua},
year={2024},
eprint={2401.13223},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| null |
Non_BioNLP
|
# TAT-LLM: A Specialized Language Model for Discrete Reasoning over Tabular and Textual Data
Paper: https://arxiv.org/abs/2401.13223
Code: https://github.com/fengbinzhu/TAT-LLM
## Introduction
We present TAT-LLM, a specialized language model crafted through the innovative Step-wise Pipeline approach, focusing on the nuanced realm of tabular and textual question answering (QA). This model is the fruit of rigorously fine-tuning the LLaMA 2 architecture with a novel dataset, autonomously generated from expertly annotated resources. TAT-LLM stands at the intersection of tabular comprehension and textual analysis, engineered to excel by embodying three fundamental phases: Extraction, Reasoning, and Execution. Our empirical findings illuminate TAT-LLM's remarkable capability to eclipse traditional benchmarks, surmounting even the most advanced models and colossal language models such as GPT-4 across a suite of demanding financial QA tasks like FinQA, TAT-QA, and TAT-DQA. This endeavor not only sets a new standard for task-specific language models but also paves the way for future explorations in optimizing smaller models for highly specialized functions.
| Model | Size | FINQA | TATQA | TATDQA |
| --- | --- | --- | --- | --- |
| GPT-3.5-Turbo | - | 58.00 | 59.47 | 52.74 |
| GPT-4 | - | 63.91 | 71.92 | 64.46 |
| [TAT-LLM-7B-LORA](https://huggingface.co/next-tat/tat-llm-7b-lora) | 7B | 65.13 | 76.49 | 71.38 |
| [TAT-LLM-7B-FFT](https://huggingface.co/next-tat/tat-llm-7b-fft) | 7B | 69.75 | 76.91 | 72.64 |
| [TAT-LLM-13B-LORA](https://huggingface.co/next-tat/tat-llm-13b-lora) | 13B | 71.93 | 77.51 | 72.22 |
| [TAT-LLM-13B-FFT](https://huggingface.co/next-tat/tat-llm-13b-fft) | 13B | 72.97 | 78.41 | 73.18 |
| [TAT-LLM-70B-LORA](https://huggingface.co/next-tat/tat-llm-70b-lora) | 70B | **76.81** | 81.42 | 76.55 |
| [TAT-LLM-70B-FFT](https://huggingface.co/next-tat/tat-llm-70b-fft) | 70B | 76.11 | **82.20** | **76.97** |
## Training
We train our TAT-LLM model in various sizes, including 7B, 13B, and 70B, using different methods such as parameter-efficient fine-tuning and full-parameter fine-tuning of LLaMA 2 on a combination of financial data from the FinQA, TAT-QA, and TAT-DQA training sets([🤗HuggingFace Repo](https://huggingface.co/datasets/next-tat/tat-llm-instructions)). To refine accuracy, we introduce an External Executor, enhancing the model by processing intermediate outputs to derive conclusive answers. Please refer to the [paper](https://arxiv.org/abs/2401.13223) for more details.
## Inference & Evaluation
Please refer to code [here](https://github.com/fengbinzhu/TAT-LLM)
## Citation
If you find this model helpful, please consider citing our paper:
```
@misc{zhu2024tatllm,
title={TAT-LLM: A Specialized Language Model for Discrete Reasoning over Tabular and Textual Data},
author={Fengbin Zhu and Ziyang Liu and Fuli Feng and Chao Wang and Moxin Li and Tat-Seng Chua},
year={2024},
eprint={2401.13223},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": ["en"], "license": "llama2"}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,491 |
gokulsrinivasagan/bert_uncased_L-2_H-128_A-2_wnli
|
gokulsrinivasagan
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:google/bert_uncased_L-2_H-128_A-2",
"base_model:finetune:google/bert_uncased_L-2_H-128_A-2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-12-04T21:08:56Z |
2024-12-04T21:09:05+00:00
| 5 | 0 |
---
base_model: google/bert_uncased_L-2_H-128_A-2
datasets:
- glue
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert_uncased_L-2_H-128_A-2_wnli
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- type: accuracy
value: 0.4647887323943662
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_uncased_L-2_H-128_A-2_wnli
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6972
- Accuracy: 0.4648
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6962 | 1.0 | 3 | 0.6984 | 0.4507 |
| 0.6909 | 2.0 | 6 | 0.6972 | 0.4648 |
| 0.6962 | 3.0 | 9 | 0.6980 | 0.4507 |
| 0.6922 | 4.0 | 12 | 0.6989 | 0.4507 |
| 0.6926 | 5.0 | 15 | 0.6998 | 0.4366 |
| 0.6907 | 6.0 | 18 | 0.7011 | 0.4085 |
| 0.6946 | 7.0 | 21 | 0.7024 | 0.4085 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_uncased_L-2_H-128_A-2_wnli
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6972
- Accuracy: 0.4648
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6962 | 1.0 | 3 | 0.6984 | 0.4507 |
| 0.6909 | 2.0 | 6 | 0.6972 | 0.4648 |
| 0.6962 | 3.0 | 9 | 0.6980 | 0.4507 |
| 0.6922 | 4.0 | 12 | 0.6989 | 0.4507 |
| 0.6926 | 5.0 | 15 | 0.6998 | 0.4366 |
| 0.6907 | 6.0 | 18 | 0.7011 | 0.4085 |
| 0.6946 | 7.0 | 21 | 0.7024 | 0.4085 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
|
{"base_model": "google/bert_uncased_L-2_H-128_A-2", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert_uncased_L-2_H-128_A-2_wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE WNLI", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.4647887323943662, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,492 |
hopkins/mbart-finetuned-eng-ind-47
|
hopkins
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-03T03:41:18Z |
2023-07-03T03:59:13+00:00
| 10 | 0 |
---
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: mbart-finetuned-eng-ind-47
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-47
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7657
- Bleu: 21.8229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-47
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7657
- Bleu: 21.8229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "mbart-finetuned-eng-ind-47", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 42,493 |
TheBloke/ALMA-7B-Pretrain-AWQ
|
TheBloke
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2309.11674",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | 2023-09-22T23:48:24Z |
2023-11-09T18:18:13+00:00
| 8 | 2 |
---
base_model: haoranxu/ALMA-7b-Pretrain
license: mit
model_name: ALMA 7B Pretrain
inference: false
model_creator: haoranxu
model_type: llama
prompt_template: 'Translate this from Chinese to English:
Chinese: {prompt}
English:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# ALMA 7B Pretrain - AWQ
- Model creator: [haoranxu](https://huggingface.co/haoranxu)
- Original model: [ALMA 7B Pretrain](https://huggingface.co/haoranxu/ALMA-7b-Pretrain)
<!-- description start -->
## Description
This repo contains AWQ model files for [haoranxu's ALMA 7B Pretrain](https://huggingface.co/haoranxu/ALMA-7b-Pretrain).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models, however using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-GGUF)
* [haoranxu's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/haoranxu/ALMA-7b-Pretrain)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ALMA
```
Translate this from Chinese to English:
Chinese: {prompt}
English:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `mit`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [haoranxu's ALMA 7B Pretrain](https://huggingface.co/haoranxu/ALMA-7b-Pretrain).
<!-- licensing end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.89 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- When using vLLM as a server, pass the `--quantization awq` parameter, for example:
```shell
python3 python -m vllm.entrypoints.api_server --model TheBloke/ALMA-7B-Pretrain-AWQ --quantization awq --dtype half
```
When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/ALMA-7B-Pretrain-AWQ", quantization="awq", dtype="half")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code
### Install the necessary packages
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### You can then try the following example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/ALMA-7B-Pretrain-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
prompt = "Tell me about AI"
prompt_template=f'''Translate this from Chinese to English:
Chinese: {prompt}
English:
'''
print("\n\n*** Generate:")
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
tokens,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
print("Output: ", tokenizer.decode(generation_output[0]))
# Inference can also be done using transformers' pipeline
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), and [vLLM](https://github.com/vllm-project/vllm).
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781).
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: haoranxu's ALMA 7B Pretrain
**ALMA** (**A**dvanced **L**anguage **M**odel-based tr**A**nslator) is an LLM-based translation model, which adopts a new translation model paradigm: it begins with fine-tuning on monolingual data and is further optimized using high-quality parallel data. This two-step fine-tuning process ensures strong translation performance.
Please find more details in our [paper](https://arxiv.org/abs/2309.11674).
```
@misc{xu2023paradigm,
title={A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models},
author={Haoran Xu and Young Jin Kim and Amr Sharaf and Hany Hassan Awadalla},
year={2023},
eprint={2309.11674},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
We release four translation models presented in the paper:
- **ALMA-7B**: Full-weight Fine-tune LLaMA-2-7B on 20B monolingual tokens and then **Full-weight** fine-tune on human-written parallel data
- **ALMA-7B-LoRA**: Full-weight Fine-tune LLaMA-2-7B on 20B monolingual tokens and then **LoRA** fine-tune on human-written parallel data
- **ALMA-13B**: Full-weight Fine-tune LLaMA-2-7B on 12B monolingual tokens and then **Full-weight** fine-tune on human-written parallel data
- **ALMA-13B-LoRA** (Our best system): Full-weight Fine-tune LLaMA-2-7B on 12B monolingual tokens and then **LoRA** fine-tune on human-written parallel data
Model checkpoints are released at huggingface:
| Models | Base Model Link | LoRA Link |
|:-------------:|:---------------:|:---------:|
| ALMA-7B | [haoranxu/ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) | - |
| ALMA-7B-LoRA | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-7B-Pretrain-LoRA) |
| ALMA-13B | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | - |
| ALMA-13B-LoRA | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-13B-Pretrain-LoRA) |
Note that Base Model Link for `*-LoRA` models are LLaMA-2 fine-tuned by monolingual data (20B for the 7B model and 12B for the 13B model)
A quick start to use our best system (ALMA-13B-LoRA) for translation. An example of translating "我爱机器翻译。" into English:
```
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM
from transformers import LlamaTokenizer
# Load base model and LoRA weights
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, "haoranxu/ALMA-13B-Pretrain-LoRA")
tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain", padding_side='left')
# Add the source setence into the prompt template
prompt="Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()
# Translation
with torch.no_grad():
generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```
Please find more details in our [GitHub repository](https://github.com/fe1ixxu/ALMA)
| null |
Non_BioNLP
|
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# ALMA 7B Pretrain - AWQ
- Model creator: [haoranxu](https://huggingface.co/haoranxu)
- Original model: [ALMA 7B Pretrain](https://huggingface.co/haoranxu/ALMA-7b-Pretrain)
<!-- description start -->
## Description
This repo contains AWQ model files for [haoranxu's ALMA 7B Pretrain](https://huggingface.co/haoranxu/ALMA-7b-Pretrain).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models, however using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-GGUF)
* [haoranxu's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/haoranxu/ALMA-7b-Pretrain)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ALMA
```
Translate this from Chinese to English:
Chinese: {prompt}
English:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `mit`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [haoranxu's ALMA 7B Pretrain](https://huggingface.co/haoranxu/ALMA-7b-Pretrain).
<!-- licensing end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.89 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- When using vLLM as a server, pass the `--quantization awq` parameter, for example:
```shell
python3 python -m vllm.entrypoints.api_server --model TheBloke/ALMA-7B-Pretrain-AWQ --quantization awq --dtype half
```
When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/ALMA-7B-Pretrain-AWQ", quantization="awq", dtype="half")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code
### Install the necessary packages
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### You can then try the following example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/ALMA-7B-Pretrain-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
prompt = "Tell me about AI"
prompt_template=f'''Translate this from Chinese to English:
Chinese: {prompt}
English:
'''
print("\n\n*** Generate:")
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
tokens,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
print("Output: ", tokenizer.decode(generation_output[0]))
# Inference can also be done using transformers' pipeline
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), and [vLLM](https://github.com/vllm-project/vllm).
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781).
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: haoranxu's ALMA 7B Pretrain
**ALMA** (**A**dvanced **L**anguage **M**odel-based tr**A**nslator) is an LLM-based translation model, which adopts a new translation model paradigm: it begins with fine-tuning on monolingual data and is further optimized using high-quality parallel data. This two-step fine-tuning process ensures strong translation performance.
Please find more details in our [paper](https://arxiv.org/abs/2309.11674).
```
@misc{xu2023paradigm,
title={A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models},
author={Haoran Xu and Young Jin Kim and Amr Sharaf and Hany Hassan Awadalla},
year={2023},
eprint={2309.11674},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
We release four translation models presented in the paper:
- **ALMA-7B**: Full-weight Fine-tune LLaMA-2-7B on 20B monolingual tokens and then **Full-weight** fine-tune on human-written parallel data
- **ALMA-7B-LoRA**: Full-weight Fine-tune LLaMA-2-7B on 20B monolingual tokens and then **LoRA** fine-tune on human-written parallel data
- **ALMA-13B**: Full-weight Fine-tune LLaMA-2-7B on 12B monolingual tokens and then **Full-weight** fine-tune on human-written parallel data
- **ALMA-13B-LoRA** (Our best system): Full-weight Fine-tune LLaMA-2-7B on 12B monolingual tokens and then **LoRA** fine-tune on human-written parallel data
Model checkpoints are released at huggingface:
| Models | Base Model Link | LoRA Link |
|:-------------:|:---------------:|:---------:|
| ALMA-7B | [haoranxu/ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) | - |
| ALMA-7B-LoRA | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-7B-Pretrain-LoRA) |
| ALMA-13B | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | - |
| ALMA-13B-LoRA | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-13B-Pretrain-LoRA) |
Note that Base Model Link for `*-LoRA` models are LLaMA-2 fine-tuned by monolingual data (20B for the 7B model and 12B for the 13B model)
A quick start to use our best system (ALMA-13B-LoRA) for translation. An example of translating "我爱机器翻译。" into English:
```
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM
from transformers import LlamaTokenizer
# Load base model and LoRA weights
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, "haoranxu/ALMA-13B-Pretrain-LoRA")
tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain", padding_side='left')
# Add the source setence into the prompt template
prompt="Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()
# Translation
with torch.no_grad():
generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```
Please find more details in our [GitHub repository](https://github.com/fe1ixxu/ALMA)
|
{"base_model": "haoranxu/ALMA-7b-Pretrain", "license": "mit", "model_name": "ALMA 7B Pretrain", "inference": false, "model_creator": "haoranxu", "model_type": "llama", "prompt_template": "Translate this from Chinese to English:\nChinese: {prompt}\nEnglish:\n", "quantized_by": "TheBloke"}
|
task
|
[
"TRANSLATION"
] | 42,494 |
HPLT/hplt_bert_base_ar
|
HPLT
|
fill-mask
|
[
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"ar",
"dataset:HPLT/hplt_monolingual_v1_2",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | 2024-04-22T01:11:15Z |
2024-11-24T19:13:35+00:00
| 26 | 1 |
---
datasets:
- HPLT/hplt_monolingual_v1_2
language:
- ar
license: apache-2.0
tags:
- BERT
- HPLT
- encoder
inference: false
---
# HPLT Bert for Arabic
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so called masked language model. In particular, we used the modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_ar")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_ar", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModeltForMultipleChoice`.
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model at intervals of every 3125 training steps in separate branches. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_ar", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_ar")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
})
```
```bibtex
@inproceedings{de-gibert-etal-2024-new-massive,
title = "A New Massive Multilingual Dataset for High-Performance Language Technologies",
author = {de Gibert, Ona and
Nail, Graeme and
Arefyev, Nikolay and
Ba{\~n}{\'o}n, Marta and
van der Linde, Jelmer and
Ji, Shaoxiong and
Zaragoza-Bernabeu, Jaume and
Aulamo, Mikko and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Kutuzov, Andrey and
Pyysalo, Sampo and
Oepen, Stephan and
Tiedemann, J{\"o}rg},
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.100",
pages = "1116--1128",
abstract = "We present the HPLT (High Performance Language Technologies) language resources, a new massive multilingual dataset including both monolingual and bilingual corpora extracted from CommonCrawl and previously unused web crawls from the Internet Archive. We describe our methods for data acquisition, management and processing of large corpora, which rely on open-source software tools and high-performance computing. Our monolingual collection focuses on low- to medium-resourced languages and covers 75 languages and a total of {\mbox{$\approx$}} 5.6 trillion word tokens de-duplicated on the document level. Our English-centric parallel corpus is derived from its monolingual counterpart and covers 18 language pairs and more than 96 million aligned sentence pairs with roughly 1.4 billion English tokens. The HPLT language resources are one of the largest open text corpora ever released, providing a great resource for language modeling and machine translation training. We publicly release the corpora, the software, and the tools used in this work.",
}
```
| null |
Non_BioNLP
|
# HPLT Bert for Arabic
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so called masked language model. In particular, we used the modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_ar")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_ar", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModeltForMultipleChoice`.
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model at intervals of every 3125 training steps in separate branches. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_ar", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_ar")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
})
```
```bibtex
@inproceedings{de-gibert-etal-2024-new-massive,
title = "A New Massive Multilingual Dataset for High-Performance Language Technologies",
author = {de Gibert, Ona and
Nail, Graeme and
Arefyev, Nikolay and
Ba{\~n}{\'o}n, Marta and
van der Linde, Jelmer and
Ji, Shaoxiong and
Zaragoza-Bernabeu, Jaume and
Aulamo, Mikko and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Kutuzov, Andrey and
Pyysalo, Sampo and
Oepen, Stephan and
Tiedemann, J{\"o}rg},
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.100",
pages = "1116--1128",
abstract = "We present the HPLT (High Performance Language Technologies) language resources, a new massive multilingual dataset including both monolingual and bilingual corpora extracted from CommonCrawl and previously unused web crawls from the Internet Archive. We describe our methods for data acquisition, management and processing of large corpora, which rely on open-source software tools and high-performance computing. Our monolingual collection focuses on low- to medium-resourced languages and covers 75 languages and a total of {\mbox{$\approx$}} 5.6 trillion word tokens de-duplicated on the document level. Our English-centric parallel corpus is derived from its monolingual counterpart and covers 18 language pairs and more than 96 million aligned sentence pairs with roughly 1.4 billion English tokens. The HPLT language resources are one of the largest open text corpora ever released, providing a great resource for language modeling and machine translation training. We publicly release the corpora, the software, and the tools used in this work.",
}
```
|
{"datasets": ["HPLT/hplt_monolingual_v1_2"], "language": ["ar"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "inference": false}
|
task
|
[
"TRANSLATION"
] | 42,495 |
gokulsrinivasagan/distilbert_lda_100_v1_book_rte
|
gokulsrinivasagan
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/distilbert_lda_100_v1_book",
"base_model:finetune:gokulsrinivasagan/distilbert_lda_100_v1_book",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-12-09T18:02:33Z |
2024-12-09T18:03:29+00:00
| 4 | 0 |
---
base_model: gokulsrinivasagan/distilbert_lda_100_v1_book
datasets:
- glue
language:
- en
library_name: transformers
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert_lda_100_v1_book_rte
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- type: accuracy
value: 0.5270758122743683
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_lda_100_v1_book_rte
This model is a fine-tuned version of [gokulsrinivasagan/distilbert_lda_100_v1_book](https://huggingface.co/gokulsrinivasagan/distilbert_lda_100_v1_book) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6902
- Accuracy: 0.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7006 | 1.0 | 10 | 0.6935 | 0.4765 |
| 0.6908 | 2.0 | 20 | 0.6908 | 0.5199 |
| 0.6809 | 3.0 | 30 | 0.6902 | 0.5271 |
| 0.6542 | 4.0 | 40 | 0.6959 | 0.5343 |
| 0.5813 | 5.0 | 50 | 0.7287 | 0.5560 |
| 0.4761 | 6.0 | 60 | 0.7502 | 0.5632 |
| 0.3513 | 7.0 | 70 | 0.8904 | 0.5776 |
| 0.2219 | 8.0 | 80 | 1.1421 | 0.5415 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.2.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_lda_100_v1_book_rte
This model is a fine-tuned version of [gokulsrinivasagan/distilbert_lda_100_v1_book](https://huggingface.co/gokulsrinivasagan/distilbert_lda_100_v1_book) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6902
- Accuracy: 0.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7006 | 1.0 | 10 | 0.6935 | 0.4765 |
| 0.6908 | 2.0 | 20 | 0.6908 | 0.5199 |
| 0.6809 | 3.0 | 30 | 0.6902 | 0.5271 |
| 0.6542 | 4.0 | 40 | 0.6959 | 0.5343 |
| 0.5813 | 5.0 | 50 | 0.7287 | 0.5560 |
| 0.4761 | 6.0 | 60 | 0.7502 | 0.5632 |
| 0.3513 | 7.0 | 70 | 0.8904 | 0.5776 |
| 0.2219 | 8.0 | 80 | 1.1421 | 0.5415 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.2.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.1
|
{"base_model": "gokulsrinivasagan/distilbert_lda_100_v1_book", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert_lda_100_v1_book_rte", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE RTE", "type": "glue", "args": "rte"}, "metrics": [{"type": "accuracy", "value": 0.5270758122743683, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,496 |
tner/bert-large-tweetner7-2021
|
tner
|
token-classification
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"dataset:tner/tweetner7",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-07-12T09:24:07Z |
2022-09-27T15:21:36+00:00
| 15 | 0 |
---
datasets:
- tner/tweetner7
metrics:
- f1
- precision
- recall
pipeline_tag: token-classification
widget:
- text: 'Get the all-analog Classic Vinyl Edition of `Takin'' Off` Album from {@herbiehancock@}
via {@bluenoterecords@} link below: {{URL}}'
example_title: NER Example 1
model-index:
- name: tner/bert-large-tweetner7-2021
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: tner/tweetner7
type: tner/tweetner7
args: tner/tweetner7
metrics:
- type: f1
value: 0.5974718775368201
name: F1 (test_2021)
- type: precision
value: 0.5992091183996279
name: Precision (test_2021)
- type: recall
value: 0.5957446808510638
name: Recall (test_2021)
- type: f1_macro
value: 0.5392877076670867
name: Macro F1 (test_2021)
- type: precision_macro
value: 0.5398425980592713
name: Macro Precision (test_2021)
- type: recall_macro
value: 0.5439768272225339
name: Macro Recall (test_2021)
- type: f1_entity_span
value: 0.7497514474530674
name: Entity Span F1 (test_2021)
- type: precision_entity_span
value: 0.7584003786086133
name: Entity Span Precision (test_2020)
- type: recall_entity_span
value: 0.7412975598473459
name: Entity Span Recall (test_2021)
- type: f1
value: 0.5662616558349817
name: F1 (test_2020)
- type: precision
value: 0.6215880893300249
name: Precision (test_2020)
- type: recall
value: 0.519979242345615
name: Recall (test_2020)
- type: f1_macro
value: 0.5096985017746614
name: Macro F1 (test_2020)
- type: precision_macro
value: 0.5628721370469417
name: Macro Precision (test_2020)
- type: recall_macro
value: 0.47520198274721537
name: Macro Recall (test_2020)
- type: f1_entity_span
value: 0.7065868263473053
name: Entity Span F1 (test_2020)
- type: precision_entity_span
value: 0.7841772151898734
name: Entity Span Precision (test_2020)
- type: recall_entity_span
value: 0.6429683445770628
name: Entity Span Recall (test_2020)
---
# tner/bert-large-tweetner7-2021
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the
[tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset (`train_2021` split).
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set of 2021:
- F1 (micro): 0.5974718775368201
- Precision (micro): 0.5992091183996279
- Recall (micro): 0.5957446808510638
- F1 (macro): 0.5392877076670867
- Precision (macro): 0.5398425980592713
- Recall (macro): 0.5439768272225339
The per-entity breakdown of the F1 score on the test set are below:
- corporation: 0.4486772486772486
- creative_work: 0.34173228346456697
- event: 0.40238450074515647
- group: 0.556795797767564
- location: 0.6394904458598726
- person: 0.7940364439536168
- product: 0.5918972332015809
For F1 scores, the confidence interval is obtained by bootstrap as below:
- F1 (micro):
- 90%: [0.5884763705775744, 0.6075466841645367]
- 95%: [0.586724466800271, 0.6087071446445204]
- F1 (macro):
- 90%: [0.5884763705775744, 0.6075466841645367]
- 95%: [0.586724466800271, 0.6087071446445204]
Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/bert-large-tweetner7-2021/raw/main/eval/metric.json)
and [metric file of entity span](https://huggingface.co/tner/bert-large-tweetner7-2021/raw/main/eval/metric_span.json).
### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip.
```shell
pip install tner
```
[TweetNER7](https://huggingface.co/datasets/tner/tweetner7) pre-processed tweets where the account name and URLs are
converted into special formats (see the dataset page for more detail), so we process tweets accordingly and then run the model prediction as below.
```python
import re
from urlextract import URLExtract
from tner import TransformersNER
extractor = URLExtract()
def format_tweet(tweet):
# mask web urls
urls = extractor.find_urls(tweet)
for url in urls:
tweet = tweet.replace(url, "{{URL}}")
# format twitter account
tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet)
return tweet
text = "Get the all-analog Classic Vinyl Edition of `Takin' Off` Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek"
text_format = format_tweet(text)
model = TransformersNER("tner/bert-large-tweetner7-2021")
model.predict([text_format])
```
It can be used via transformers library but it is not recommended as CRF layer is not supported at the moment.
### Training hyperparameters
The following hyperparameters were used during training:
- dataset: ['tner/tweetner7']
- dataset_split: train_2021
- dataset_name: None
- local_dataset: None
- model: bert-large-cased
- crf: False
- max_length: 128
- epoch: 30
- batch_size: 32
- lr: 0.0001
- random_seed: 0
- gradient_accumulation_steps: 1
- weight_decay: 1e-07
- lr_warmup_step_ratio: 0.3
- max_grad_norm: 1
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/bert-large-tweetner7-2021/raw/main/trainer_config.json).
### Reference
If you use the model, please cite T-NER paper and TweetNER7 paper.
- T-NER
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.7",
doi = "10.18653/v1/2021.eacl-demos.7",
pages = "53--62",
abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
- TweetNER7
```
@inproceedings{ushio-etal-2022-tweet,
title = "{N}amed {E}ntity {R}ecognition in {T}witter: {A} {D}ataset and {A}nalysis on {S}hort-{T}erm {T}emporal {S}hifts",
author = "Ushio, Asahi and
Neves, Leonardo and
Silva, Vitor and
Barbieri, Francesco. and
Camacho-Collados, Jose",
booktitle = "The 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing",
month = nov,
year = "2022",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
| null |
Non_BioNLP
|
# tner/bert-large-tweetner7-2021
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the
[tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset (`train_2021` split).
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set of 2021:
- F1 (micro): 0.5974718775368201
- Precision (micro): 0.5992091183996279
- Recall (micro): 0.5957446808510638
- F1 (macro): 0.5392877076670867
- Precision (macro): 0.5398425980592713
- Recall (macro): 0.5439768272225339
The per-entity breakdown of the F1 score on the test set are below:
- corporation: 0.4486772486772486
- creative_work: 0.34173228346456697
- event: 0.40238450074515647
- group: 0.556795797767564
- location: 0.6394904458598726
- person: 0.7940364439536168
- product: 0.5918972332015809
For F1 scores, the confidence interval is obtained by bootstrap as below:
- F1 (micro):
- 90%: [0.5884763705775744, 0.6075466841645367]
- 95%: [0.586724466800271, 0.6087071446445204]
- F1 (macro):
- 90%: [0.5884763705775744, 0.6075466841645367]
- 95%: [0.586724466800271, 0.6087071446445204]
Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/bert-large-tweetner7-2021/raw/main/eval/metric.json)
and [metric file of entity span](https://huggingface.co/tner/bert-large-tweetner7-2021/raw/main/eval/metric_span.json).
### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip.
```shell
pip install tner
```
[TweetNER7](https://huggingface.co/datasets/tner/tweetner7) pre-processed tweets where the account name and URLs are
converted into special formats (see the dataset page for more detail), so we process tweets accordingly and then run the model prediction as below.
```python
import re
from urlextract import URLExtract
from tner import TransformersNER
extractor = URLExtract()
def format_tweet(tweet):
# mask web urls
urls = extractor.find_urls(tweet)
for url in urls:
tweet = tweet.replace(url, "{{URL}}")
# format twitter account
tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet)
return tweet
text = "Get the all-analog Classic Vinyl Edition of `Takin' Off` Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek"
text_format = format_tweet(text)
model = TransformersNER("tner/bert-large-tweetner7-2021")
model.predict([text_format])
```
It can be used via transformers library but it is not recommended as CRF layer is not supported at the moment.
### Training hyperparameters
The following hyperparameters were used during training:
- dataset: ['tner/tweetner7']
- dataset_split: train_2021
- dataset_name: None
- local_dataset: None
- model: bert-large-cased
- crf: False
- max_length: 128
- epoch: 30
- batch_size: 32
- lr: 0.0001
- random_seed: 0
- gradient_accumulation_steps: 1
- weight_decay: 1e-07
- lr_warmup_step_ratio: 0.3
- max_grad_norm: 1
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/bert-large-tweetner7-2021/raw/main/trainer_config.json).
### Reference
If you use the model, please cite T-NER paper and TweetNER7 paper.
- T-NER
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.7",
doi = "10.18653/v1/2021.eacl-demos.7",
pages = "53--62",
abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
- TweetNER7
```
@inproceedings{ushio-etal-2022-tweet,
title = "{N}amed {E}ntity {R}ecognition in {T}witter: {A} {D}ataset and {A}nalysis on {S}hort-{T}erm {T}emporal {S}hifts",
author = "Ushio, Asahi and
Neves, Leonardo and
Silva, Vitor and
Barbieri, Francesco. and
Camacho-Collados, Jose",
booktitle = "The 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing",
month = nov,
year = "2022",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
{"datasets": ["tner/tweetner7"], "metrics": ["f1", "precision", "recall"], "pipeline_tag": "token-classification", "widget": [{"text": "Get the all-analog Classic Vinyl Edition of `Takin' Off` Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}", "example_title": "NER Example 1"}], "model-index": [{"name": "tner/bert-large-tweetner7-2021", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "tner/tweetner7", "type": "tner/tweetner7", "args": "tner/tweetner7"}, "metrics": [{"type": "f1", "value": 0.5974718775368201, "name": "F1 (test_2021)"}, {"type": "precision", "value": 0.5992091183996279, "name": "Precision (test_2021)"}, {"type": "recall", "value": 0.5957446808510638, "name": "Recall (test_2021)"}, {"type": "f1_macro", "value": 0.5392877076670867, "name": "Macro F1 (test_2021)"}, {"type": "precision_macro", "value": 0.5398425980592713, "name": "Macro Precision (test_2021)"}, {"type": "recall_macro", "value": 0.5439768272225339, "name": "Macro Recall (test_2021)"}, {"type": "f1_entity_span", "value": 0.7497514474530674, "name": "Entity Span F1 (test_2021)"}, {"type": "precision_entity_span", "value": 0.7584003786086133, "name": "Entity Span Precision (test_2020)"}, {"type": "recall_entity_span", "value": 0.7412975598473459, "name": "Entity Span Recall (test_2021)"}, {"type": "f1", "value": 0.5662616558349817, "name": "F1 (test_2020)"}, {"type": "precision", "value": 0.6215880893300249, "name": "Precision (test_2020)"}, {"type": "recall", "value": 0.519979242345615, "name": "Recall (test_2020)"}, {"type": "f1_macro", "value": 0.5096985017746614, "name": "Macro F1 (test_2020)"}, {"type": "precision_macro", "value": 0.5628721370469417, "name": "Macro Precision (test_2020)"}, {"type": "recall_macro", "value": 0.47520198274721537, "name": "Macro Recall (test_2020)"}, {"type": "f1_entity_span", "value": 0.7065868263473053, "name": "Entity Span F1 (test_2020)"}, {"type": "precision_entity_span", "value": 0.7841772151898734, "name": "Entity Span Precision (test_2020)"}, {"type": "recall_entity_span", "value": 0.6429683445770628, "name": "Entity Span Recall (test_2020)"}]}]}]}
|
task
|
[
"NAMED_ENTITY_RECOGNITION"
] | 42,497 |
myqq/mymodel
|
myqq
|
image-text-to-text
|
[
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"multimodal",
"conversational",
"en",
"arxiv:2308.12966",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-09-10T03:55:26Z |
2024-09-10T07:28:43+00:00
| 0 | 0 |
---
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- multimodal
---
# Qwen2-VL-2B-Instruct
## Introduction
We're excited to unveil **Qwen2-VL**, the latest iteration of our Qwen-VL model, representing nearly a year of innovation.
### What’s New in Qwen2-VL?
#### Key Enhancements:
* **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
* **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.
* **Agent that can operate your mobiles, robots, etc.**: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions.
* **Multilingual Support**: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.
#### Model Architecture Updates:
* **Naive Dynamic Resolution**: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience.
<p align="center">
<img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/qwen2_vl.jpg" width="80%"/>
<p>
* **Multimodal Rotary Position Embedding (M-ROPE)**: Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities.
<p align="center">
<img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/mrope.png" width="80%"/>
<p>
We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 2B Qwen2-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub](https://github.com/QwenLM/Qwen2-VL).
## Evaluation
### Image Benchmarks
| Benchmark | InternVL2-2B | MiniCPM-V 2.0 | **Qwen2-VL-2B** |
| :--- | :---: | :---: | :---: |
| MMMU<sub>val</sub> | 36.3 | 38.2 | **41.1** |
| DocVQA<sub>test</sub> | 86.9 | - | **90.1** |
| InfoVQA<sub>test</sub> | 58.9 | - | **65.5** |
| ChartQA<sub>test</sub> | **76.2** | - | 73.5 |
| TextVQA<sub>val</sub> | 73.4 | - | **79.7** |
| OCRBench | 781 | 605 | **794** |
| MTVQA | - | - | **20.0** |
| VCR<sub>en easy</sub> | - | - | **81.45**
| VCR<sub>zh easy</sub> | - | - | **46.16**
| RealWorldQA | 57.3 | 55.8 | **62.9** |
| MME<sub>sum</sub> | **1876.8** | 1808.6 | 1872.0 |
| MMBench-EN<sub>test</sub> | 73.2 | 69.1 | **74.9** |
| MMBench-CN<sub>test</sub> | 70.9 | 66.5 | **73.5** |
| MMBench-V1.1<sub>test</sub> | 69.6 | 65.8 | **72.2** |
| MMT-Bench<sub>test</sub> | - | - | **54.5** |
| MMStar | **49.8** | 39.1 | 48.0 |
| MMVet<sub>GPT-4-Turbo</sub> | 39.7 | 41.0 | **49.5** |
| HallBench<sub>avg</sub> | 38.0 | 36.1 | **41.7** |
| MathVista<sub>testmini</sub> | **46.0** | 39.8 | 43.0 |
| MathVision | - | - | **12.4** |
### Video Benchmarks
| Benchmark | **Qwen2-VL-2B** |
| :--- | :---: |
| MVBench | **63.2** |
| PerceptionTest<sub>test</sub> | **53.9** |
| EgoSchema<sub>test</sub> | **54.9** |
| Video-MME<sub>wo/w subs</sub> | **55.6**/**60.4** |
## Requirements
The code of Qwen2-VL has been in the latest Hugging face transformers and we advise you to build from source with command `pip install git+https://github.com/huggingface/transformers`, or you might encounter the following error:
```
KeyError: 'qwen2_vl'
```
## Quickstart
We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:
```bash
pip install qwen-vl-utils
```
Here we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`:
```python
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2VLForConditionalGeneration.from_pretrained(
# "Qwen/Qwen2-VL-2B-Instruct",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
# default processer
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")
# The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
<details>
<summary>Without qwen_vl_utils</summary>
```python
from PIL import Image
import requests
import torch
from torchvision import io
from typing import Dict
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
# Load the model in half-precision on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")
# Image
url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
conversation = [
{
"role": "user",
"content": [
{
"type": "image",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Excepted output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'
inputs = processor(
text=[text_prompt], images=[image], padding=True, return_tensors="pt"
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [
output_ids[len(input_ids) :]
for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
```
</details>
<details>
<summary>Multi image inference</summary>
```python
# Messages containing multiple images and a text query
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "Identify the similarities between these images."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Video inference</summary>
```python
# Messages containing a images list as a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": [
"file:///path/to/frame1.jpg",
"file:///path/to/frame2.jpg",
"file:///path/to/frame3.jpg",
"file:///path/to/frame4.jpg",
],
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Messages containing a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "file:///path/to/video1.mp4",
"max_pixels": 360 * 420,
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Batch inference</summary>
```python
# Sample messages for batch inference
messages1 = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "What are the common elements in these pictures?"},
],
}
]
messages2 = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages1]
# Preparation for batch inference
texts = [
processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=texts,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)
```
</details>
### More Usage Tips
For input images, we support local files, base64, and URLs. For videos, we currently only support local files.
```python
# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.
## Local file path
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Image URL
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "http://path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Base64 encoded image
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "data:image;base64,/9j/..."},
{"type": "text", "text": "Describe this image."},
],
}
]
```
#### Image Resolution for performance boost
The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.
```python
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
)
```
Besides, We provide two methods for fine-grained control over the image size input to the model:
1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.
2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28.
```python
# min_pixels and max_pixels
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"resized_height": 280,
"resized_width": 420,
},
{"type": "text", "text": "Describe this image."},
],
}
]
# resized_height and resized_width
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"min_pixels": 50176,
"max_pixels": 50176,
},
{"type": "text", "text": "Describe this image."},
],
}
]
```
## Limitations
While Qwen2-VL are applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known restrictions:
1. Lack of Audio Support: The current model does **not comprehend audio information** within videos.
2. Data timeliness: Our image dataset is **updated until June 2023**, and information subsequent to this date may not be covered.
3. Constraints in Individuals and Intellectual Property (IP): The model's capacity to recognize specific individuals or IPs is limited, potentially failing to comprehensively cover all well-known personalities or brands.
4. Limited Capacity for Complex Instruction: When faced with intricate multi-step instructions, the model's understanding and execution capabilities require enhancement.
5. Insufficient Counting Accuracy: Particularly in complex scenes, the accuracy of object counting is not high, necessitating further improvements.
6. Weak Spatial Reasoning Skills: Especially in 3D spaces, the model's inference of object positional relationships is inadequate, making it difficult to precisely judge the relative positions of objects.
These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application.
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{Qwen2-VL,
title={Qwen2-VL},
author={Qwen team},
year={2024}
}
@article{Qwen-VL,
title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
journal={arXiv preprint arXiv:2308.12966},
year={2023}
}
```
| null |
Non_BioNLP
|
# Qwen2-VL-2B-Instruct
## Introduction
We're excited to unveil **Qwen2-VL**, the latest iteration of our Qwen-VL model, representing nearly a year of innovation.
### What’s New in Qwen2-VL?
#### Key Enhancements:
* **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
* **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.
* **Agent that can operate your mobiles, robots, etc.**: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions.
* **Multilingual Support**: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.
#### Model Architecture Updates:
* **Naive Dynamic Resolution**: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience.
<p align="center">
<img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/qwen2_vl.jpg" width="80%"/>
<p>
* **Multimodal Rotary Position Embedding (M-ROPE)**: Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities.
<p align="center">
<img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/mrope.png" width="80%"/>
<p>
We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 2B Qwen2-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub](https://github.com/QwenLM/Qwen2-VL).
## Evaluation
### Image Benchmarks
| Benchmark | InternVL2-2B | MiniCPM-V 2.0 | **Qwen2-VL-2B** |
| :--- | :---: | :---: | :---: |
| MMMU<sub>val</sub> | 36.3 | 38.2 | **41.1** |
| DocVQA<sub>test</sub> | 86.9 | - | **90.1** |
| InfoVQA<sub>test</sub> | 58.9 | - | **65.5** |
| ChartQA<sub>test</sub> | **76.2** | - | 73.5 |
| TextVQA<sub>val</sub> | 73.4 | - | **79.7** |
| OCRBench | 781 | 605 | **794** |
| MTVQA | - | - | **20.0** |
| VCR<sub>en easy</sub> | - | - | **81.45**
| VCR<sub>zh easy</sub> | - | - | **46.16**
| RealWorldQA | 57.3 | 55.8 | **62.9** |
| MME<sub>sum</sub> | **1876.8** | 1808.6 | 1872.0 |
| MMBench-EN<sub>test</sub> | 73.2 | 69.1 | **74.9** |
| MMBench-CN<sub>test</sub> | 70.9 | 66.5 | **73.5** |
| MMBench-V1.1<sub>test</sub> | 69.6 | 65.8 | **72.2** |
| MMT-Bench<sub>test</sub> | - | - | **54.5** |
| MMStar | **49.8** | 39.1 | 48.0 |
| MMVet<sub>GPT-4-Turbo</sub> | 39.7 | 41.0 | **49.5** |
| HallBench<sub>avg</sub> | 38.0 | 36.1 | **41.7** |
| MathVista<sub>testmini</sub> | **46.0** | 39.8 | 43.0 |
| MathVision | - | - | **12.4** |
### Video Benchmarks
| Benchmark | **Qwen2-VL-2B** |
| :--- | :---: |
| MVBench | **63.2** |
| PerceptionTest<sub>test</sub> | **53.9** |
| EgoSchema<sub>test</sub> | **54.9** |
| Video-MME<sub>wo/w subs</sub> | **55.6**/**60.4** |
## Requirements
The code of Qwen2-VL has been in the latest Hugging face transformers and we advise you to build from source with command `pip install git+https://github.com/huggingface/transformers`, or you might encounter the following error:
```
KeyError: 'qwen2_vl'
```
## Quickstart
We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:
```bash
pip install qwen-vl-utils
```
Here we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`:
```python
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2VLForConditionalGeneration.from_pretrained(
# "Qwen/Qwen2-VL-2B-Instruct",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
# default processer
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")
# The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
<details>
<summary>Without qwen_vl_utils</summary>
```python
from PIL import Image
import requests
import torch
from torchvision import io
from typing import Dict
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
# Load the model in half-precision on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")
# Image
url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
conversation = [
{
"role": "user",
"content": [
{
"type": "image",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Excepted output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'
inputs = processor(
text=[text_prompt], images=[image], padding=True, return_tensors="pt"
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [
output_ids[len(input_ids) :]
for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
```
</details>
<details>
<summary>Multi image inference</summary>
```python
# Messages containing multiple images and a text query
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "Identify the similarities between these images."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Video inference</summary>
```python
# Messages containing a images list as a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": [
"file:///path/to/frame1.jpg",
"file:///path/to/frame2.jpg",
"file:///path/to/frame3.jpg",
"file:///path/to/frame4.jpg",
],
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Messages containing a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "file:///path/to/video1.mp4",
"max_pixels": 360 * 420,
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Batch inference</summary>
```python
# Sample messages for batch inference
messages1 = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "What are the common elements in these pictures?"},
],
}
]
messages2 = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages1]
# Preparation for batch inference
texts = [
processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=texts,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)
```
</details>
### More Usage Tips
For input images, we support local files, base64, and URLs. For videos, we currently only support local files.
```python
# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.
## Local file path
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Image URL
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "http://path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Base64 encoded image
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "data:image;base64,/9j/..."},
{"type": "text", "text": "Describe this image."},
],
}
]
```
#### Image Resolution for performance boost
The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.
```python
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
)
```
Besides, We provide two methods for fine-grained control over the image size input to the model:
1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.
2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28.
```python
# min_pixels and max_pixels
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"resized_height": 280,
"resized_width": 420,
},
{"type": "text", "text": "Describe this image."},
],
}
]
# resized_height and resized_width
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"min_pixels": 50176,
"max_pixels": 50176,
},
{"type": "text", "text": "Describe this image."},
],
}
]
```
## Limitations
While Qwen2-VL are applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known restrictions:
1. Lack of Audio Support: The current model does **not comprehend audio information** within videos.
2. Data timeliness: Our image dataset is **updated until June 2023**, and information subsequent to this date may not be covered.
3. Constraints in Individuals and Intellectual Property (IP): The model's capacity to recognize specific individuals or IPs is limited, potentially failing to comprehensively cover all well-known personalities or brands.
4. Limited Capacity for Complex Instruction: When faced with intricate multi-step instructions, the model's understanding and execution capabilities require enhancement.
5. Insufficient Counting Accuracy: Particularly in complex scenes, the accuracy of object counting is not high, necessitating further improvements.
6. Weak Spatial Reasoning Skills: Especially in 3D spaces, the model's inference of object positional relationships is inadequate, making it difficult to precisely judge the relative positions of objects.
These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application.
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{Qwen2-VL,
title={Qwen2-VL},
author={Qwen team},
year={2024}
}
@article{Qwen-VL,
title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
journal={arXiv preprint arXiv:2308.12966},
year={2023}
}
```
|
{"language": ["en"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "image-text-to-text", "tags": ["multimodal"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,498 |
JEONGHEESIK/distilbert-base-uncased-finetuned-emotion
|
JEONGHEESIK
|
text-classification
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-06-18T11:07:49Z |
2024-06-18T14:40:15+00:00
| 4 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.9265
name: Accuracy
- type: f1
value: 0.9264199668853673
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2099
- Accuracy: 0.9265
- F1: 0.9264
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.826 | 1.0 | 250 | 0.3122 | 0.9125 | 0.9116 |
| 0.249 | 2.0 | 500 | 0.2099 | 0.9265 | 0.9264 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2099
- Accuracy: 0.9265
- F1: 0.9264
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.826 | 1.0 | 250 | 0.3122 | 0.9125 | 0.9116 |
| 0.249 | 2.0 | 500 | 0.2099 | 0.9265 | 0.9264 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9265, "name": "Accuracy"}, {"type": "f1", "value": 0.9264199668853673, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,499 |
MISHANM/Bangla_text_generation_Llama3.2_3B_instruction
|
MISHANM
| null |
[
"transformers",
"safetensors",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
] | 2024-10-24T12:25:51Z |
2025-01-03T14:51:39+00:00
| 0 | 0 |
---
base_model:
- meta-llama/Llama-3.2-3B-Instruct
library_name: transformers
---
# MISHANM/Bangla_text_generation_Llama3.2_3B_instruction
This model is fine-tuned for the Bangla language, capable of answering queries and translating text from English to Bangla. It leverages advanced natural language processing techniques to provide accurate and context-aware responses.
## Model Details
1. Language: Bangla
2. Tasks: Question Answering, Translation (English to Bangla
3. Base Model: meta-llama/Llama-3.2-3B-Instruct
# Training Details
The model is trained on approx 29K instruction samples.
1. GPUs: 2*AMD Instinct MI210
2. Training Time: 2:56:07 hours
## Inference with HuggingFace
```python3
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Set the device
device = "cuda" if torch.cuda.is_available() else "cpu"
# Load the fine-tuned model and tokenizer
model_path = "MISHANM/Bangla_text_generation_Llama3.2_3B_instruction"
model = AutoModelForCausalLM.from_pretrained(model_path)
# Wrap the model with DataParallel if multiple GPUs are available
if torch.cuda.device_count() > 1:
print(f"Using {torch.cuda.device_count()} GPUs")
model = torch.nn.DataParallel(model)
# Move the model to the appropriate device
model.to(device)
tokenizer = AutoTokenizer.from_pretrained(model_path)
# Function to generate text
def generate_text(prompt, max_length=1000, temperature=0.9):
# Format the prompt according to the chat template
messages = [
{
"role": "system",
"content": "You are a Bangla language expert and linguist, with same knowledge give answers in Bangla language. ",
},
{"role": "user", "content": prompt}
]
# Apply the chat template
formatted_prompt = f"<|system|>{messages[0]['content']}<|user|>{messages[1]['content']}<|assistant|>"
# Tokenize and generate output
inputs = tokenizer(formatted_prompt, return_tensors="pt").to(device)
output = model.module.generate( # Use model.module for DataParallel
**inputs, max_new_tokens=max_length, temperature=temperature, do_sample=True
)
return tokenizer.decode(output[0], skip_special_tokens=True)
# Example usage
prompt = """Give me a story."""
translated_text = generate_text(prompt)
print(translated_text)
```
## Citation Information
```
@misc{MISHANM/Bangla_text_generation_Llama3.2_3B_instruction,
author = {Mishan Maurya},
title = {Introducing Fine Tuned LLM for Bangla Language},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
}
```
| null |
Non_BioNLP
|
# MISHANM/Bangla_text_generation_Llama3.2_3B_instruction
This model is fine-tuned for the Bangla language, capable of answering queries and translating text from English to Bangla. It leverages advanced natural language processing techniques to provide accurate and context-aware responses.
## Model Details
1. Language: Bangla
2. Tasks: Question Answering, Translation (English to Bangla
3. Base Model: meta-llama/Llama-3.2-3B-Instruct
# Training Details
The model is trained on approx 29K instruction samples.
1. GPUs: 2*AMD Instinct MI210
2. Training Time: 2:56:07 hours
## Inference with HuggingFace
```python3
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Set the device
device = "cuda" if torch.cuda.is_available() else "cpu"
# Load the fine-tuned model and tokenizer
model_path = "MISHANM/Bangla_text_generation_Llama3.2_3B_instruction"
model = AutoModelForCausalLM.from_pretrained(model_path)
# Wrap the model with DataParallel if multiple GPUs are available
if torch.cuda.device_count() > 1:
print(f"Using {torch.cuda.device_count()} GPUs")
model = torch.nn.DataParallel(model)
# Move the model to the appropriate device
model.to(device)
tokenizer = AutoTokenizer.from_pretrained(model_path)
# Function to generate text
def generate_text(prompt, max_length=1000, temperature=0.9):
# Format the prompt according to the chat template
messages = [
{
"role": "system",
"content": "You are a Bangla language expert and linguist, with same knowledge give answers in Bangla language. ",
},
{"role": "user", "content": prompt}
]
# Apply the chat template
formatted_prompt = f"<|system|>{messages[0]['content']}<|user|>{messages[1]['content']}<|assistant|>"
# Tokenize and generate output
inputs = tokenizer(formatted_prompt, return_tensors="pt").to(device)
output = model.module.generate( # Use model.module for DataParallel
**inputs, max_new_tokens=max_length, temperature=temperature, do_sample=True
)
return tokenizer.decode(output[0], skip_special_tokens=True)
# Example usage
prompt = """Give me a story."""
translated_text = generate_text(prompt)
print(translated_text)
```
## Citation Information
```
@misc{MISHANM/Bangla_text_generation_Llama3.2_3B_instruction,
author = {Mishan Maurya},
title = {Introducing Fine Tuned LLM for Bangla Language},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
}
```
|
{"base_model": ["meta-llama/Llama-3.2-3B-Instruct"], "library_name": "transformers"}
|
task
|
[
"QUESTION_ANSWERING",
"TRANSLATION"
] | 42,500 |
research-dump/bge-base-en-v1.5_wikidata_ent_masked_wikidata_ent_masked
|
research-dump
|
text-classification
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"region:us"
] | 2025-02-07T07:43:42Z |
2025-02-07T07:43:58+00:00
| 9 | 0 |
---
base_model: BAAI/bge-base-en-v1.5
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: '###Instruction: Multi-class classification, answer with one of the labels:
[delete, keep, speedy delete, comment] : ###Input: Q85993818: Q85993818 : no
description : ( [MASK] | history | links | entity usage | logs ) Non-notable item.
··· 🌸 Rachmat04 · ☕ 13:43, 25 February 2020 (UTC)'
- text: '###Instruction: Multi-class classification, answer with one of the labels:
[delete, keep, speedy delete, comment] : ###Input: Q11843502: Template:Rfd links
Merged with Q4470435 . Succu ([[User talk:Succu| int:Talkpagelinktext ]]) 19:36,
12 February 2014 (UTC)'
- text: '###Instruction: Multi-class classification, answer with one of the labels:
[delete, keep, speedy delete, comment] : ###Input: Q12971777: computational complexity
theory (Q12971777) : theoretical computer science and mathematics theory that
classifies problems according to their inherent difficulty, and relates those
classes to each other : ( [MASK] | history | links | entity usage | logs ) Merged
with Q205084 .
-- Hoistsail ( talk ) 13:06, 9 February 2014 (UTC)'
- text: '###Instruction: Multi-class classification, answer with one of the labels:
[delete, keep, speedy delete, comment] : ###Input: Q4068983: Arequipa (Q4068983)
: genus of plants : ( [MASK] | history | links | entity usage | logs ) Merged
with Q134007 , via The Game -- Alexander Sigachov ( talk ) 09:42, 1 July 2014
(UTC) Not deleted and unmerged. Lymantria ( talk ) 05:40, 2 July 2014 (UTC)'
- text: '###Instruction: Multi-class classification, answer with one of the labels:
[delete, keep, speedy delete, comment] : ###Input: Q5241318: David Wrigley (Q5241318)
: Canadian ice hockey player : ( [MASK] | history | links | entity usage | logs
) Merged with Q1177272 . Jssfrk ( talk ) 17:02, 7 March 2014 (UTC)'
inference: true
---
# SetFit with BAAI/bge-base-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 5 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q16350309: Adrianu (Q16350309) : Wikimedia disambiguation page : ( [MASK] | history | links | entity usage | logs ) Merged with Q13195357 . Holger1959 ( talk ) 04:18, 11 August 2014 (UTC)'</li><li>"###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q3934746: Richard A. Colla (Q3934746) : American actor and film director : ( [MASK] | history | links | entity usage | logs ) Merged in Q3430649 . BohemianRhapsody ( talk ) 21:46, 26 September 2013 (UTC) Isn't this item still in use? -- BeneBot* ( talk ) 21:55, 26 September 2013 (UTC)"</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q11060907: Cymo deplanatus (Q11060907) : species of crustacean : ( [MASK] | history | links | entity usage | logs ) Merged into Q13508703 .\n-- Liangent ( talk ) 03:09, 27 November 2013 (UTC)'</li></ul> |
| 4 | <ul><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q10135731: Template:Rfd links Merged with Q7150841 . Kittenono ( talk ) 16:36, 8 August 2013 (UTC)'</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q13164625: Guodian Chu Slips (Q13164625) : archaeological discovery in 1993 in Hubei, China : ( [MASK] | history | links | entity usage | logs ) Merged into Q1151566 .\n-- 218.252.84.110 13:55, 30 May 2013 (UTC) '</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q16822576: Template:Rfd links Merged with Q16489568 . Pikolas ([[User talk:Pikolas| int:Talkpagelinktext ]]) 14:27, 12 August 2014 (UTC)'</li></ul> |
| 0 | <ul><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q60781646: Yellow Vests movement in France (Q60781646) : Spontaneous social movement in France : ( [MASK] | history | links | entity usage | logs ) Already exists in more complete form at Q58805164 -- Fa suisse ( talk ) 01:40, 8 February 2019 (UTC) Not deleted — MisterSynergy ( talk ) 08:15, 9 February 2019 (UTC)'</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q61641722: Clementine Creevy (Q61641722) : American musician, singer and actress : ( [MASK] | history | links | entity usage | logs ) Not notable Baby come ( talk ) 11:42, 7 February 2021 (UTC) Not deleted per above - meets notability -- DannyS712 ( talk ) 23:14, 7 February 2021 (UTC)'</li></ul> |
| 3 | <ul><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q16017531: Template:Rfd links Merged into Q13135852 . -- DracoRoboter ([[User talk:DracoRoboter| int:Talkpagelinktext ]]) 08:36, 21 May 2014 (UTC)'</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q12160071: Template:Rfd links Empty item. Aced ( talk ) 17:02, 30 June 2013 (UTC)'</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q5591236: Q5591236 : no description : ( [MASK] | history | links | entity usage | logs ) Dup of Q809292 . -- John F. Lewis ( talk ) 23:11, 21 April 2013 (UTC)'</li></ul> |
| 2 | <ul><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q118210393: Malgorzata (Q118210393) : female given name : ( [MASK] | history | links | entity usage | logs ) Mistake Teukros ( talk ) 20:27, 7 May 2023 (UTC) On hold This item is linked from 1 other. -- DeltaBot ( talk ) 20:30, 7 May 2023 (UTC)'</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q83807364: SoundCloud account (Q83807364) : user account on SoundCloud : ( [MASK] | history | links | entity usage | logs ) Not used Dartyytrad ( talk ) 21:18, 23 November 2021 (UTC) On hold This item is linked from 8 others. -- DeltaBot ( talk ) 21:20, 23 November 2021 (UTC) seems to be used? unsure if those uses are legit. BrokenSegue ( talk ) 04:06, 25 November 2021 (UTC) I listed one for deletion: Wikidata:Requests_for_deletions#Q83807994 . some should probably be changed to "soundcloud" + soundcloud id. --- Jura 11:45, 11 December 2021 (UTC) seems to be used? unsure if those uses are legit. BrokenSegue ( talk ) 04:06, 25 November 2021 (UTC) I listed one for deletion: Wikidata:Requests_for_deletions#Q83807994 . some should probably be changed to "soundcloud" + soundcloud id. --- Jura 11:45, 11 December 2021 (UTC) I listed one for deletion: Wikidata:Requests_for_deletions#Q83807994 . some should probably be changed to "soundcloud" + soundcloud id. --- Jura 11:45, 11 December 2021 (UTC) seems to be used? unsure if those uses are legit. BrokenSegue ( talk ) 04:06, 25 November 2021 (UTC) I listed one for deletion: Wikidata:Requests_for_deletions#Q83807994 . some should probably be changed to "soundcloud" + soundcloud id. --- Jura 11:45, 11 December 2021 (UTC) I listed one for deletion: Wikidata:Requests_for_deletions#Q83807994 . I listed one for deletion: Wikidata:Requests_for_deletions#Q83807994 . Not done due to uses. Thanks. Mike Peel ( talk ) 18:05, 27 February 2022 (UTC)'</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q23636702: Template:Rfd links empty item, no sitelinks, no statements, notability not clear Queryzo ([[User talk:Queryzo| int:Talkpagelinktext ]]) 18:25, 19 May 2017 (UTC)'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("research-dump/bge-base-en-v1.5_wikidata_ent_masked_wikidata_ent_masked")
# Run inference
preds = model("###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q11843502: Template:Rfd links Merged with Q4470435 . Succu ([[User talk:Succu| int:Talkpagelinktext ]]) 19:36, 12 February 2014 (UTC)")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:-----|
| Word count | 28 | 53.7838 | 2279 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 2 |
| 1 | 733 |
| 2 | 18 |
| 3 | 56 |
| 4 | 190 |
### Training Hyperparameters
- batch_size: (8, 2)
- num_epochs: (5, 5)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 10
- body_learning_rate: (1e-05, 1e-05)
- head_learning_rate: 5e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: True
- use_amp: True
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0004 | 1 | 0.041 | - |
| 0.2002 | 500 | 0.1861 | 0.1338 |
| 0.4003 | 1000 | 0.0927 | 0.1352 |
| 0.6005 | 1500 | 0.0539 | 0.1385 |
| 0.8006 | 2000 | 0.0414 | 0.1415 |
| 1.0008 | 2500 | 0.0284 | 0.1429 |
| 1.2010 | 3000 | 0.0218 | 0.1359 |
| 1.4011 | 3500 | 0.0204 | 0.1388 |
| 1.6013 | 4000 | 0.0184 | 0.1486 |
| 1.8014 | 4500 | 0.0157 | 0.1465 |
| 2.0016 | 5000 | 0.0116 | 0.1530 |
| 2.2018 | 5500 | 0.0088 | 0.1492 |
| 2.4019 | 6000 | 0.0078 | 0.1582 |
| 2.6021 | 6500 | 0.0081 | 0.1680 |
| 2.8022 | 7000 | 0.0062 | 0.1487 |
| 3.0024 | 7500 | 0.0053 | 0.1466 |
| 3.2026 | 8000 | 0.004 | 0.1462 |
| 3.4027 | 8500 | 0.0039 | 0.1489 |
| 3.6029 | 9000 | 0.0025 | 0.1507 |
| 3.8030 | 9500 | 0.0014 | 0.1487 |
| 4.0032 | 10000 | 0.0015 | 0.1471 |
| 4.2034 | 10500 | 0.0017 | 0.1433 |
| 4.4035 | 11000 | 0.001 | 0.1434 |
| 4.6037 | 11500 | 0.0013 | 0.1425 |
| 4.8038 | 12000 | 0.0007 | 0.1436 |
### Framework Versions
- Python: 3.12.7
- SetFit: 1.1.1
- Sentence Transformers: 3.4.1
- Transformers: 4.48.2
- PyTorch: 2.6.0+cu124
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with BAAI/bge-base-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 5 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q16350309: Adrianu (Q16350309) : Wikimedia disambiguation page : ( [MASK] | history | links | entity usage | logs ) Merged with Q13195357 . Holger1959 ( talk ) 04:18, 11 August 2014 (UTC)'</li><li>"###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q3934746: Richard A. Colla (Q3934746) : American actor and film director : ( [MASK] | history | links | entity usage | logs ) Merged in Q3430649 . BohemianRhapsody ( talk ) 21:46, 26 September 2013 (UTC) Isn't this item still in use? -- BeneBot* ( talk ) 21:55, 26 September 2013 (UTC)"</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q11060907: Cymo deplanatus (Q11060907) : species of crustacean : ( [MASK] | history | links | entity usage | logs ) Merged into Q13508703 .\n-- Liangent ( talk ) 03:09, 27 November 2013 (UTC)'</li></ul> |
| 4 | <ul><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q10135731: Template:Rfd links Merged with Q7150841 . Kittenono ( talk ) 16:36, 8 August 2013 (UTC)'</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q13164625: Guodian Chu Slips (Q13164625) : archaeological discovery in 1993 in Hubei, China : ( [MASK] | history | links | entity usage | logs ) Merged into Q1151566 .\n-- 218.252.84.110 13:55, 30 May 2013 (UTC) '</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q16822576: Template:Rfd links Merged with Q16489568 . Pikolas ([[User talk:Pikolas| int:Talkpagelinktext ]]) 14:27, 12 August 2014 (UTC)'</li></ul> |
| 0 | <ul><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q60781646: Yellow Vests movement in France (Q60781646) : Spontaneous social movement in France : ( [MASK] | history | links | entity usage | logs ) Already exists in more complete form at Q58805164 -- Fa suisse ( talk ) 01:40, 8 February 2019 (UTC) Not deleted — MisterSynergy ( talk ) 08:15, 9 February 2019 (UTC)'</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q61641722: Clementine Creevy (Q61641722) : American musician, singer and actress : ( [MASK] | history | links | entity usage | logs ) Not notable Baby come ( talk ) 11:42, 7 February 2021 (UTC) Not deleted per above - meets notability -- DannyS712 ( talk ) 23:14, 7 February 2021 (UTC)'</li></ul> |
| 3 | <ul><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q16017531: Template:Rfd links Merged into Q13135852 . -- DracoRoboter ([[User talk:DracoRoboter| int:Talkpagelinktext ]]) 08:36, 21 May 2014 (UTC)'</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q12160071: Template:Rfd links Empty item. Aced ( talk ) 17:02, 30 June 2013 (UTC)'</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q5591236: Q5591236 : no description : ( [MASK] | history | links | entity usage | logs ) Dup of Q809292 . -- John F. Lewis ( talk ) 23:11, 21 April 2013 (UTC)'</li></ul> |
| 2 | <ul><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q118210393: Malgorzata (Q118210393) : female given name : ( [MASK] | history | links | entity usage | logs ) Mistake Teukros ( talk ) 20:27, 7 May 2023 (UTC) On hold This item is linked from 1 other. -- DeltaBot ( talk ) 20:30, 7 May 2023 (UTC)'</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q83807364: SoundCloud account (Q83807364) : user account on SoundCloud : ( [MASK] | history | links | entity usage | logs ) Not used Dartyytrad ( talk ) 21:18, 23 November 2021 (UTC) On hold This item is linked from 8 others. -- DeltaBot ( talk ) 21:20, 23 November 2021 (UTC) seems to be used? unsure if those uses are legit. BrokenSegue ( talk ) 04:06, 25 November 2021 (UTC) I listed one for deletion: Wikidata:Requests_for_deletions#Q83807994 . some should probably be changed to "soundcloud" + soundcloud id. --- Jura 11:45, 11 December 2021 (UTC) seems to be used? unsure if those uses are legit. BrokenSegue ( talk ) 04:06, 25 November 2021 (UTC) I listed one for deletion: Wikidata:Requests_for_deletions#Q83807994 . some should probably be changed to "soundcloud" + soundcloud id. --- Jura 11:45, 11 December 2021 (UTC) I listed one for deletion: Wikidata:Requests_for_deletions#Q83807994 . some should probably be changed to "soundcloud" + soundcloud id. --- Jura 11:45, 11 December 2021 (UTC) seems to be used? unsure if those uses are legit. BrokenSegue ( talk ) 04:06, 25 November 2021 (UTC) I listed one for deletion: Wikidata:Requests_for_deletions#Q83807994 . some should probably be changed to "soundcloud" + soundcloud id. --- Jura 11:45, 11 December 2021 (UTC) I listed one for deletion: Wikidata:Requests_for_deletions#Q83807994 . I listed one for deletion: Wikidata:Requests_for_deletions#Q83807994 . Not done due to uses. Thanks. Mike Peel ( talk ) 18:05, 27 February 2022 (UTC)'</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q23636702: Template:Rfd links empty item, no sitelinks, no statements, notability not clear Queryzo ([[User talk:Queryzo| int:Talkpagelinktext ]]) 18:25, 19 May 2017 (UTC)'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("research-dump/bge-base-en-v1.5_wikidata_ent_masked_wikidata_ent_masked")
# Run inference
preds = model("###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q11843502: Template:Rfd links Merged with Q4470435 . Succu ([[User talk:Succu| int:Talkpagelinktext ]]) 19:36, 12 February 2014 (UTC)")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:-----|
| Word count | 28 | 53.7838 | 2279 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 2 |
| 1 | 733 |
| 2 | 18 |
| 3 | 56 |
| 4 | 190 |
### Training Hyperparameters
- batch_size: (8, 2)
- num_epochs: (5, 5)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 10
- body_learning_rate: (1e-05, 1e-05)
- head_learning_rate: 5e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: True
- use_amp: True
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0004 | 1 | 0.041 | - |
| 0.2002 | 500 | 0.1861 | 0.1338 |
| 0.4003 | 1000 | 0.0927 | 0.1352 |
| 0.6005 | 1500 | 0.0539 | 0.1385 |
| 0.8006 | 2000 | 0.0414 | 0.1415 |
| 1.0008 | 2500 | 0.0284 | 0.1429 |
| 1.2010 | 3000 | 0.0218 | 0.1359 |
| 1.4011 | 3500 | 0.0204 | 0.1388 |
| 1.6013 | 4000 | 0.0184 | 0.1486 |
| 1.8014 | 4500 | 0.0157 | 0.1465 |
| 2.0016 | 5000 | 0.0116 | 0.1530 |
| 2.2018 | 5500 | 0.0088 | 0.1492 |
| 2.4019 | 6000 | 0.0078 | 0.1582 |
| 2.6021 | 6500 | 0.0081 | 0.1680 |
| 2.8022 | 7000 | 0.0062 | 0.1487 |
| 3.0024 | 7500 | 0.0053 | 0.1466 |
| 3.2026 | 8000 | 0.004 | 0.1462 |
| 3.4027 | 8500 | 0.0039 | 0.1489 |
| 3.6029 | 9000 | 0.0025 | 0.1507 |
| 3.8030 | 9500 | 0.0014 | 0.1487 |
| 4.0032 | 10000 | 0.0015 | 0.1471 |
| 4.2034 | 10500 | 0.0017 | 0.1433 |
| 4.4035 | 11000 | 0.001 | 0.1434 |
| 4.6037 | 11500 | 0.0013 | 0.1425 |
| 4.8038 | 12000 | 0.0007 | 0.1436 |
### Framework Versions
- Python: 3.12.7
- SetFit: 1.1.1
- Sentence Transformers: 3.4.1
- Transformers: 4.48.2
- PyTorch: 2.6.0+cu124
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "BAAI/bge-base-en-v1.5", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q85993818: Q85993818 : no description : ( [MASK] | history | links | entity usage | logs ) Non-notable item. ··· 🌸 Rachmat04 · ☕ 13:43, 25 February 2020 (UTC)"}, {"text": "###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q11843502: Template:Rfd links Merged with Q4470435 . Succu ([[User talk:Succu| int:Talkpagelinktext ]]) 19:36, 12 February 2014 (UTC)"}, {"text": "###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q12971777: computational complexity theory (Q12971777) : theoretical computer science and mathematics theory that classifies problems according to their inherent difficulty, and relates those classes to each other : ( [MASK] | history | links | entity usage | logs ) Merged with Q205084 .\n-- Hoistsail ( talk ) 13:06, 9 February 2014 (UTC)"}, {"text": "###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q4068983: Arequipa (Q4068983) : genus of plants : ( [MASK] | history | links | entity usage | logs ) Merged with Q134007 , via The Game -- Alexander Sigachov ( talk ) 09:42, 1 July 2014 (UTC) Not deleted and unmerged. Lymantria ( talk ) 05:40, 2 July 2014 (UTC)"}, {"text": "###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Q5241318: David Wrigley (Q5241318) : Canadian ice hockey player : ( [MASK] | history | links | entity usage | logs ) Merged with Q1177272 . Jssfrk ( talk ) 17:02, 7 March 2014 (UTC)"}], "inference": true}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,501 |
youdiniplays/filipinolingo
|
youdiniplays
|
translation
|
[
"transformers",
"translation",
"en",
"tl",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-12-20T14:52:42Z |
2023-12-20T15:04:40+00:00
| 0 | 0 |
---
language:
- en
- tl
library_name: transformers
license: apache-2.0
tags:
- translation
---
| null |
Non_BioNLP
|
{"language": ["en", "tl"], "library_name": "transformers", "license": "apache-2.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 42,502 |
|
xared1001/bloom-7b1_pytorch
|
xared1001
|
text-generation
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"code",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",
"pt",
"rn",
"rw",
"sn",
"st",
"sw",
"ta",
"te",
"tn",
"ts",
"tum",
"tw",
"ur",
"vi",
"wo",
"xh",
"yo",
"zh",
"zhs",
"zht",
"zu",
"arxiv:1909.08053",
"arxiv:2110.02861",
"arxiv:2108.12409",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-06-13T19:47:59Z |
2023-06-13T21:02:39+00:00
| 16 | 0 |
---
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
license: bigscience-bloom-rail-1.0
pipeline_tag: text-generation
---
<h1 style='text-align: center '>BLOOM LM</h1>
<h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Version 1.0 / 26.May.2022
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Data](#training-data)
4. [Risks and Limitations](#risks-and-limitations)
5. [Evaluation](#evaluation)
6. [Recommendations](#recommendations)
7. [Glossary and Calculations](#glossary-and-calculations)
8. [More Information](#more-information)
9. [Model Card Authors](#model-card-authors)
## Model Details
### Basics
*This section provides information for anyone who wants to know about the model.*
<details>
<summary>Click to expand</summary> <br/>
**Developed by:** BigScience ([website](https://bigscience.huggingface.co))
* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
**Model Type:** Transformer-based Language Model
**Version:** 1.0.0
**Languages:** Multiple; see [training data](#training-data)
**License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))
**Release Date Estimate:** Monday, 11.July.2022
**Send Questions to:** [email protected]
**Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022
**Funded by:**
* The French government.
* Hugging Face ([website](https://huggingface.co)).
* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*
</details>
### Technical Specifications
*This section provides information for people who work on model development.*
<details>
<summary>Click to expand</summary><br/>
Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):
* Decoder-only architecture
* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))
* ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions
* 7,069,016,064 parameters:
* 1,027,604,480 embedding parameters
* 30 layers, 32 attention heads
* Hidden layers are 4096-dimensional
* Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))
**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).
* Hardware: 384 A100 80GB GPUs (48 nodes):
* Additional 32 A100 80GB GPUs (4 nodes) in reserve
* 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links
* CPU: AMD
* CPU memory: 512GB per node
* GPU memory: 640GB per node
* Inter-node connect: Omni-Path Architecture (OPA)
* NCCL-communications network: a fully dedicated subnet
* Disc IO network: shared network with other types of nodes
* Software:
* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
* apex ([Github link](https://github.com/NVIDIA/apex))
#### **Training**
Training logs: [Tensorboard link](https://huggingface.co/tensorboard/bigscience/tr11c-2B5-logs)
- Number of epochs: 1 (*current target*)
- Dates:
- Started 11th March, 2022 11:42am PST
- Ended 5th July, 2022
- Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments)
- Server training location: Île-de-France, France
#### **Tokenization**
The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using:
- A byte-level Byte Pair Encoding (BPE) algorithm
- A simple pre-tokenization rule, no normalization
- A vocabulary size of 250,680
It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
</details>
### Environmental Impact
<details>
<summary>Click to expand</summary><br/>
The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)*
**Estimated electricity usage:** *(Forthcoming upon completion of training.)*
</details>
<p> </p>
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
<details>
<summary>Click to expand</summary><br/>
### Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
#### **Direct Use**
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.
#### **Out-of-scope Uses**
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
- Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license)
#### Others Affected (Parties Prenantes)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
</details>
<p> </p>
## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
<details>
<summary>Click to expand</summary><br/>
Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus).
Training data includes:
- 45 natural languages
- 12 programming languages
- In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.)
#### **Languages**
The pie chart shows the distribution of languages in training data.

The following table shows the further distribution of Niger-Congo and Indic languages in the training data.
<details>
<summary>Click to expand</summary><br/>
| Niger Congo | Percentage | | Indic | Percentage |
|----------------|------------ |------ |-----------|------------|
| Chi Tumbuka | 0.00002 | | Assamese | 0.01 |
| Kikuyu | 0.00004 | | Odia | 0.04 |
| Bambara | 0.00004 | | Gujarati | 0.04 |
| Akan | 0.00007 | | Marathi | 0.05 |
| Xitsonga | 0.00007 | | Punjabi | 0.05 |
| Sesotho | 0.00007 | | Kannada | 0.06 |
| Chi Chewa | 0.0001 | | Nepali | 0.07 |
| Setswana | 0.0002 | | Telugu | 0.09 |
| Northern Sotho | 0.0002 | | Malayalam | 0.10 |
| Fon | 0.0002 | | Urdu | 0.10 |
| Kirundi | 0.0003 | | Tamil | 0.20 |
| Wolof | 0.0004 | | Bengali | 0.50 |
| Kuganda | 0.0004 | | Hindi | 0.70 |
| Chi Shona | 0.001 |
| Isi Zulu | 0.001 |
| Igbo | 0.001 |
| Xhosa | 0.001 |
| Kinyarwanda | 0.003 |
| Yoruba | 0.006 |
| Swahili | 0.02 |
</details>
The following table shows the distribution of programming languages.
<details>
<summary>Click to expand</summary><br/>
| Extension | Language | Number of files |
|----------------|------------|-----------------|
| java | Java | 5,407,724 |
| php | PHP | 4,942,186 |
| cpp | C++ | 2,503,930 |
| py | Python | 2,435,072 |
| js | JavaScript | 1,905,518 |
| cs | C# | 1,577,347 |
| rb | Ruby | 6,78,413 |
| cc | C++ | 443,054 |
| hpp | C++ | 391,048 |
| lua | Lua | 352,317 |
| go | GO | 227,763 |
| ts | TypeScript | 195,254 |
| C | C | 134,537 |
| scala | Scala | 92,052 |
| hh | C++ | 67,161 |
| H | C++ | 55,899 |
| tsx | TypeScript | 33,107 |
| rs | Rust | 29,693 |
| phpt | PHP | 9,702 |
| c++ | C++ | 1,342 |
| h++ | C++ | 791 |
| php3 | PHP | 540 |
| phps | PHP | 270 |
| php5 | PHP | 166 |
| php4 | PHP | 29 |
</details>
</details>
<p> </p>
## Risks and Limitations
*This section identifies foreseeable harms and misunderstandings.*
<details>
<summary>Click to expand</summary><br/>
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
</details>
<p> </p>
## Evaluation
*This section describes the evaluation protocols and provides the results.*
<details>
<summary>Click to expand</summary><br/>
### Metrics
*This section describes the different ways performance is calculated and why.*
Includes:
| Metric | Why chosen |
|--------------------|--------------------------------------------------------------------|
| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
| Cross Entropy [Loss](#loss) | Standard objective for language models. |
And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
### Factors
*This section lists some different aspects of BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*
- Language, such as English or Yoruba
- Domain, such as newswire or stories
- Demographic characteristics, such as gender or nationality
### Results
*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
**Train-time Evaluation:**
As of 25.May.2022, 15:00 PST:
- Training Loss: 2.3
- Validation Loss: 2.9
- Perplexity: 16
</details>
<p> </p>
## Recommendations
*This section provides information on warnings and potential mitigations.*
<details>
<summary>Click to expand</summary><br/>
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Models pretrained with the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
</details>
<p> </p>
## Glossary and Calculations
*This section defines common terms and how metrics are calculated.*
<details>
<summary>Click to expand</summary><br/>
- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.
- <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy.
- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).
- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).
- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))
- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
</details>
<p> </p>
## More Information
<details>
<summary>Click to expand</summary><br/>
### Dataset Creation
Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling
### Technical Specifications
Blog post summarizing how the architecture, size, shape, and pre-training duration where selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours
More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model
Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss
Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md
Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md
### Initial Results
Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book
</details>
<p> </p>
## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*
Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
| null |
Non_BioNLP
|
<h1 style='text-align: center '>BLOOM LM</h1>
<h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Version 1.0 / 26.May.2022
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Data](#training-data)
4. [Risks and Limitations](#risks-and-limitations)
5. [Evaluation](#evaluation)
6. [Recommendations](#recommendations)
7. [Glossary and Calculations](#glossary-and-calculations)
8. [More Information](#more-information)
9. [Model Card Authors](#model-card-authors)
## Model Details
### Basics
*This section provides information for anyone who wants to know about the model.*
<details>
<summary>Click to expand</summary> <br/>
**Developed by:** BigScience ([website](https://bigscience.huggingface.co))
* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
**Model Type:** Transformer-based Language Model
**Version:** 1.0.0
**Languages:** Multiple; see [training data](#training-data)
**License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))
**Release Date Estimate:** Monday, 11.July.2022
**Send Questions to:** [email protected]
**Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022
**Funded by:**
* The French government.
* Hugging Face ([website](https://huggingface.co)).
* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*
</details>
### Technical Specifications
*This section provides information for people who work on model development.*
<details>
<summary>Click to expand</summary><br/>
Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):
* Decoder-only architecture
* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))
* ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions
* 7,069,016,064 parameters:
* 1,027,604,480 embedding parameters
* 30 layers, 32 attention heads
* Hidden layers are 4096-dimensional
* Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))
**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).
* Hardware: 384 A100 80GB GPUs (48 nodes):
* Additional 32 A100 80GB GPUs (4 nodes) in reserve
* 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links
* CPU: AMD
* CPU memory: 512GB per node
* GPU memory: 640GB per node
* Inter-node connect: Omni-Path Architecture (OPA)
* NCCL-communications network: a fully dedicated subnet
* Disc IO network: shared network with other types of nodes
* Software:
* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
* apex ([Github link](https://github.com/NVIDIA/apex))
#### **Training**
Training logs: [Tensorboard link](https://huggingface.co/tensorboard/bigscience/tr11c-2B5-logs)
- Number of epochs: 1 (*current target*)
- Dates:
- Started 11th March, 2022 11:42am PST
- Ended 5th July, 2022
- Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments)
- Server training location: Île-de-France, France
#### **Tokenization**
The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using:
- A byte-level Byte Pair Encoding (BPE) algorithm
- A simple pre-tokenization rule, no normalization
- A vocabulary size of 250,680
It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
</details>
### Environmental Impact
<details>
<summary>Click to expand</summary><br/>
The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)*
**Estimated electricity usage:** *(Forthcoming upon completion of training.)*
</details>
<p> </p>
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
<details>
<summary>Click to expand</summary><br/>
### Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
#### **Direct Use**
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.
#### **Out-of-scope Uses**
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
- Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license)
#### Others Affected (Parties Prenantes)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
</details>
<p> </p>
## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
<details>
<summary>Click to expand</summary><br/>
Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus).
Training data includes:
- 45 natural languages
- 12 programming languages
- In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.)
#### **Languages**
The pie chart shows the distribution of languages in training data.

The following table shows the further distribution of Niger-Congo and Indic languages in the training data.
<details>
<summary>Click to expand</summary><br/>
| Niger Congo | Percentage | | Indic | Percentage |
|----------------|------------ |------ |-----------|------------|
| Chi Tumbuka | 0.00002 | | Assamese | 0.01 |
| Kikuyu | 0.00004 | | Odia | 0.04 |
| Bambara | 0.00004 | | Gujarati | 0.04 |
| Akan | 0.00007 | | Marathi | 0.05 |
| Xitsonga | 0.00007 | | Punjabi | 0.05 |
| Sesotho | 0.00007 | | Kannada | 0.06 |
| Chi Chewa | 0.0001 | | Nepali | 0.07 |
| Setswana | 0.0002 | | Telugu | 0.09 |
| Northern Sotho | 0.0002 | | Malayalam | 0.10 |
| Fon | 0.0002 | | Urdu | 0.10 |
| Kirundi | 0.0003 | | Tamil | 0.20 |
| Wolof | 0.0004 | | Bengali | 0.50 |
| Kuganda | 0.0004 | | Hindi | 0.70 |
| Chi Shona | 0.001 |
| Isi Zulu | 0.001 |
| Igbo | 0.001 |
| Xhosa | 0.001 |
| Kinyarwanda | 0.003 |
| Yoruba | 0.006 |
| Swahili | 0.02 |
</details>
The following table shows the distribution of programming languages.
<details>
<summary>Click to expand</summary><br/>
| Extension | Language | Number of files |
|----------------|------------|-----------------|
| java | Java | 5,407,724 |
| php | PHP | 4,942,186 |
| cpp | C++ | 2,503,930 |
| py | Python | 2,435,072 |
| js | JavaScript | 1,905,518 |
| cs | C# | 1,577,347 |
| rb | Ruby | 6,78,413 |
| cc | C++ | 443,054 |
| hpp | C++ | 391,048 |
| lua | Lua | 352,317 |
| go | GO | 227,763 |
| ts | TypeScript | 195,254 |
| C | C | 134,537 |
| scala | Scala | 92,052 |
| hh | C++ | 67,161 |
| H | C++ | 55,899 |
| tsx | TypeScript | 33,107 |
| rs | Rust | 29,693 |
| phpt | PHP | 9,702 |
| c++ | C++ | 1,342 |
| h++ | C++ | 791 |
| php3 | PHP | 540 |
| phps | PHP | 270 |
| php5 | PHP | 166 |
| php4 | PHP | 29 |
</details>
</details>
<p> </p>
## Risks and Limitations
*This section identifies foreseeable harms and misunderstandings.*
<details>
<summary>Click to expand</summary><br/>
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
</details>
<p> </p>
## Evaluation
*This section describes the evaluation protocols and provides the results.*
<details>
<summary>Click to expand</summary><br/>
### Metrics
*This section describes the different ways performance is calculated and why.*
Includes:
| Metric | Why chosen |
|--------------------|--------------------------------------------------------------------|
| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
| Cross Entropy [Loss](#loss) | Standard objective for language models. |
And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
### Factors
*This section lists some different aspects of BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*
- Language, such as English or Yoruba
- Domain, such as newswire or stories
- Demographic characteristics, such as gender or nationality
### Results
*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
**Train-time Evaluation:**
As of 25.May.2022, 15:00 PST:
- Training Loss: 2.3
- Validation Loss: 2.9
- Perplexity: 16
</details>
<p> </p>
## Recommendations
*This section provides information on warnings and potential mitigations.*
<details>
<summary>Click to expand</summary><br/>
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Models pretrained with the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
</details>
<p> </p>
## Glossary and Calculations
*This section defines common terms and how metrics are calculated.*
<details>
<summary>Click to expand</summary><br/>
- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.
- <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy.
- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).
- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).
- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))
- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
</details>
<p> </p>
## More Information
<details>
<summary>Click to expand</summary><br/>
### Dataset Creation
Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling
### Technical Specifications
Blog post summarizing how the architecture, size, shape, and pre-training duration where selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours
More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model
Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss
Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md
Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md
### Initial Results
Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book
</details>
<p> </p>
## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*
Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
|
{"language": ["ak", "ar", "as", "bm", "bn", "ca", "code", "en", "es", "eu", "fon", "fr", "gu", "hi", "id", "ig", "ki", "kn", "lg", "ln", "ml", "mr", "ne", "nso", "ny", "or", "pa", "pt", "rn", "rw", "sn", "st", "sw", "ta", "te", "tn", "ts", "tum", "tw", "ur", "vi", "wo", "xh", "yo", "zh", "zhs", "zht", "zu"], "license": "bigscience-bloom-rail-1.0", "pipeline_tag": "text-generation"}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 42,503 |
TransferGraph/Capreolus_bert-base-msmarco-finetuned-lora-tweet_eval_irony
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Capreolus/bert-base-msmarco",
"base_model:adapter:Capreolus/bert-base-msmarco",
"model-index",
"region:us"
] | 2024-02-27T17:03:53Z |
2024-02-27T17:03:55+00:00
| 0 | 0 |
---
base_model: Capreolus/bert-base-msmarco
datasets:
- tweet_eval
library_name: peft
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: Capreolus_bert-base-msmarco-finetuned-lora-tweet_eval_irony
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: irony
split: validation
args: irony
metrics:
- type: accuracy
value: 0.6429319371727749
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Capreolus_bert-base-msmarco-finetuned-lora-tweet_eval_irony
This model is a fine-tuned version of [Capreolus/bert-base-msmarco](https://huggingface.co/Capreolus/bert-base-msmarco) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.5225 | None | 0 |
| 0.5351 | 1.0594 | 0 |
| 0.5759 | 0.6837 | 1 |
| 0.5843 | 0.6583 | 2 |
| 0.6168 | 0.6347 | 3 |
| 0.6241 | 0.6105 | 4 |
| 0.6 | 0.5952 | 5 |
| 0.6387 | 0.5860 | 6 |
| 0.6429 | 0.5764 | 7 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Capreolus_bert-base-msmarco-finetuned-lora-tweet_eval_irony
This model is a fine-tuned version of [Capreolus/bert-base-msmarco](https://huggingface.co/Capreolus/bert-base-msmarco) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.5225 | None | 0 |
| 0.5351 | 1.0594 | 0 |
| 0.5759 | 0.6837 | 1 |
| 0.5843 | 0.6583 | 2 |
| 0.6168 | 0.6347 | 3 |
| 0.6241 | 0.6105 | 4 |
| 0.6 | 0.5952 | 5 |
| 0.6387 | 0.5860 | 6 |
| 0.6429 | 0.5764 | 7 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"base_model": "Capreolus/bert-base-msmarco", "datasets": ["tweet_eval"], "library_name": "peft", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "Capreolus_bert-base-msmarco-finetuned-lora-tweet_eval_irony", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "irony", "split": "validation", "args": "irony"}, "metrics": [{"type": "accuracy", "value": 0.6429319371727749, "name": "accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,504 |
phoen1x/TF-Finetuned-xsum
|
phoen1x
|
summarization
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"summarization",
"en",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-05-15T22:20:52Z |
2024-02-27T13:28:55+00:00
| 18 | 1 |
---
datasets:
- xsum
language:
- en
license: apache-2.0
metrics:
- rouge
pipeline_tag: summarization
tags:
- generated_from_keras_callback
model-index:
- name: TF-Finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TF-Finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on [xsum](https://huggingface.co/datasets/xsum) dataset.
It achieves the following results on the evaluation set:
- Train Loss:
- Validation Loss:
- Epoch:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rougel | Epoch |
|:----------:|:---------------:|:---------------------------------------------:|:-----:|
| | | tf.Tensor(0.1999889, shape=(), dtype=float32) | |
### Framework versions
- Transformers 4.20.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TF-Finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on [xsum](https://huggingface.co/datasets/xsum) dataset.
It achieves the following results on the evaluation set:
- Train Loss:
- Validation Loss:
- Epoch:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rougel | Epoch |
|:----------:|:---------------:|:---------------------------------------------:|:-----:|
| | | tf.Tensor(0.1999889, shape=(), dtype=float32) | |
### Framework versions
- Transformers 4.20.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.12.1
|
{"datasets": ["xsum"], "language": ["en"], "license": "apache-2.0", "metrics": ["rouge"], "pipeline_tag": "summarization", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "TF-Finetuned-xsum", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 42,505 |
dkalpakchi/SweCTRL-Mini
|
dkalpakchi
|
text-generation
|
[
"transformers",
"pytorch",
"safetensors",
"ctrl",
"text-generation",
"sv",
"dataset:mc4",
"arxiv:2304.13994",
"arxiv:1910.09700",
"arxiv:1909.05858",
"doi:10.57967/hf/0619",
"license:bigscience-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-05-08T03:38:29Z |
2023-05-08T05:59:14+00:00
| 43 | 0 |
---
datasets:
- mc4
language:
- sv
library_name: transformers
license: bigscience-openrail-m
inference:
parameters:
top_p: 0.9
repetition_penalty: 1.1
max_new_tokens: 75
do_sample: true
widget:
- text: ':nyheter:'
example_title: News text
- text: ':wiki:'
example_title: Wikipedia text
- text: ':blogg:'
example_title: Blog post
- text: ':forum:'
example_title: Forum
- text: ':anons:'
example_title: Ads
---
# SweCTRL-Mini
<!-- Provide a quick summary of what the model is/does. -->
SweCTRL-Mini is a large Swedish language model that can be used for inference and fine-tuning on a single consumer-grade GPU. The model is based on the CTRL architecture by Keskar, McCann, Varshney, Xiong, and Socher
(2019), which means that users of the SweCTRL-Mini model can control the genre of the generated text by inserting special tokens in the generation prompts.
Crucially, note that this model is:
- **NOT** trained on following GPT-like instructions,
- **NOT** trained for conversations, like ChatGPT,
- **NOT** trained on any multi-modal data during training. Only one modality -- text, more than 99% of it in Swedish.
**Note on using Inference API (text box to the right):** There are a number of presets that start the text with appropriate control codes to control the genre, e.g., `:wiki:` for
texts form Wikipedia. You can add your own prompt on top of these control codes. For instance, if you want a Wikipedia article about Stockholm, you could write
`:wiki: Stockholm`. The generation in the example is limited to 75 new tokens max. Also, normally the generation should stop after reaching the ending control code,
which has `$` symbol at the end, e.g., `:wiki:$` for Wikipedia texts, however I couldn't configure that here, so please ignore all text after such tokens if they were to be
generated. Additionaly, note, there are **no** filters or other mechanisms for making the text safe from biases or prohibiting it from generating texts on any topics.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Dmytro Kalpakchi (with supervision from Johan Boye)
- **Shared by:** Dmytro Kalpakchi
- **Model type:** Transformer-based language model trained by predicting the next token
- **Language(s) (NLP):** Swedish
- **License:** BigScience Open RAIL-M
- **Finetuned from model:** None, trained from scratch
### Model Sources
<!-- Provide the basic links for the model. -->
- **Website:** https://swectrl.dev/
- **Repository:** https://github.com/dkalpakchi/SweCTRL-Mini
- **Paper:** https://arxiv.org/pdf/2304.13994.pdf
- **Technical note:** https://zenodo.org/record/7868205
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
The model should be used for generating texts of various genres in Swedish.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Please refer to Appendix A of the License file for information of use restrictions. The model has a limited context window of 256 tokens, so it will most probably not work well
for text summarization. Additionally, vast majority of its training data was in Swedish, although it contains tokens in other languages as well, so tasks like
Machine Translation would require further fine-tuning.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
To mitigate the inclusion of personally-identifiable data we attempted to remove sources that could contain such data to the best of our ability (see Technical note for
more details on the data filtering process). However, we have still noted that the model can generate text that includes various forms of biases, which is why we strongly
recommend human curation of the generated texts. Currently we have conducted no systematic investigation on either the kinds of biases are included in the generated texts or how
frequently they occur. The contribution of the community on this matter would be very welcome.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
For further recommendations on the use of the model, please see the associated paper.
## How to Get Started with the Model
The fastest way to start with the model is using the code below:
```py
from transformers import pipeline
pipe = pipeline(model="dkalpakchi/SweCTRL-Mini")
print(pipe(":nyheter:", max_length=256, repetition_penalty=1.1, top_p=0.9))
```
For more advanced uses and other code examples, please see the associated GitHub repository (https://github.com/dkalpakchi/SweCTRL-Mini).
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The training data includes the *subset* of cleaned Swedish mC4, as well as some documents from Project Runeberg.
The extensive information on the training data is provided in the Section 1 of the Technical note.
The interface to partially mine training data is available at: https://swectrl.dev/data
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
See Section 1 of the Technical note.
#### Training Hyperparameters
- **Training regime:** fp32
## Evaluation
See Sections 5.3, 6, and 7 in the associated paper, and Section 3 of the Technical note.
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 8 A100 GPUs
- **Hours used:** 11907.6 GPU-hours for training and experimentation
- **Provider:** BerzeLiUs supercomputer
- **Carbon Emitted:** No public data on carbon efficiency, so hard to estimate
## Technical Specifications
See Section 3 of the associated paper
## Citation
**BibTeX:**
```bibtex
@article{kalpakchi2023swectrl,
title={SweCTRL-Mini: a data-transparent Transformer-based large language model for controllable text generation in Swedish},
author={Kalpakchi, Dmytro and Boye, Johan},
journal={arXiv preprint arXiv:2304.13994},
year={2023}
}
```
**APA:**
Kalpakchi, D., & Boye, J. (2023). SweCTRL-Mini: a data-transparent Transformer-based large language model for controllable text generation in Swedish. arXiv preprint arXiv:2304.13994.
## Model Card Authors
Dmytro Kalpakchi ([email protected])
## Model Card Contact
Dmytro Kalpakchi ([email protected])
# References
Keskar, N. S., McCann, B., Varshney, L. R., Xiong, C., & Socher, R. (2019). Ctrl: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858.
| null |
Non_BioNLP
|
# SweCTRL-Mini
<!-- Provide a quick summary of what the model is/does. -->
SweCTRL-Mini is a large Swedish language model that can be used for inference and fine-tuning on a single consumer-grade GPU. The model is based on the CTRL architecture by Keskar, McCann, Varshney, Xiong, and Socher
(2019), which means that users of the SweCTRL-Mini model can control the genre of the generated text by inserting special tokens in the generation prompts.
Crucially, note that this model is:
- **NOT** trained on following GPT-like instructions,
- **NOT** trained for conversations, like ChatGPT,
- **NOT** trained on any multi-modal data during training. Only one modality -- text, more than 99% of it in Swedish.
**Note on using Inference API (text box to the right):** There are a number of presets that start the text with appropriate control codes to control the genre, e.g., `:wiki:` for
texts form Wikipedia. You can add your own prompt on top of these control codes. For instance, if you want a Wikipedia article about Stockholm, you could write
`:wiki: Stockholm`. The generation in the example is limited to 75 new tokens max. Also, normally the generation should stop after reaching the ending control code,
which has `$` symbol at the end, e.g., `:wiki:$` for Wikipedia texts, however I couldn't configure that here, so please ignore all text after such tokens if they were to be
generated. Additionaly, note, there are **no** filters or other mechanisms for making the text safe from biases or prohibiting it from generating texts on any topics.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Dmytro Kalpakchi (with supervision from Johan Boye)
- **Shared by:** Dmytro Kalpakchi
- **Model type:** Transformer-based language model trained by predicting the next token
- **Language(s) (NLP):** Swedish
- **License:** BigScience Open RAIL-M
- **Finetuned from model:** None, trained from scratch
### Model Sources
<!-- Provide the basic links for the model. -->
- **Website:** https://swectrl.dev/
- **Repository:** https://github.com/dkalpakchi/SweCTRL-Mini
- **Paper:** https://arxiv.org/pdf/2304.13994.pdf
- **Technical note:** https://zenodo.org/record/7868205
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
The model should be used for generating texts of various genres in Swedish.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Please refer to Appendix A of the License file for information of use restrictions. The model has a limited context window of 256 tokens, so it will most probably not work well
for text summarization. Additionally, vast majority of its training data was in Swedish, although it contains tokens in other languages as well, so tasks like
Machine Translation would require further fine-tuning.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
To mitigate the inclusion of personally-identifiable data we attempted to remove sources that could contain such data to the best of our ability (see Technical note for
more details on the data filtering process). However, we have still noted that the model can generate text that includes various forms of biases, which is why we strongly
recommend human curation of the generated texts. Currently we have conducted no systematic investigation on either the kinds of biases are included in the generated texts or how
frequently they occur. The contribution of the community on this matter would be very welcome.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
For further recommendations on the use of the model, please see the associated paper.
## How to Get Started with the Model
The fastest way to start with the model is using the code below:
```py
from transformers import pipeline
pipe = pipeline(model="dkalpakchi/SweCTRL-Mini")
print(pipe(":nyheter:", max_length=256, repetition_penalty=1.1, top_p=0.9))
```
For more advanced uses and other code examples, please see the associated GitHub repository (https://github.com/dkalpakchi/SweCTRL-Mini).
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The training data includes the *subset* of cleaned Swedish mC4, as well as some documents from Project Runeberg.
The extensive information on the training data is provided in the Section 1 of the Technical note.
The interface to partially mine training data is available at: https://swectrl.dev/data
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
See Section 1 of the Technical note.
#### Training Hyperparameters
- **Training regime:** fp32
## Evaluation
See Sections 5.3, 6, and 7 in the associated paper, and Section 3 of the Technical note.
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 8 A100 GPUs
- **Hours used:** 11907.6 GPU-hours for training and experimentation
- **Provider:** BerzeLiUs supercomputer
- **Carbon Emitted:** No public data on carbon efficiency, so hard to estimate
## Technical Specifications
See Section 3 of the associated paper
## Citation
**BibTeX:**
```bibtex
@article{kalpakchi2023swectrl,
title={SweCTRL-Mini: a data-transparent Transformer-based large language model for controllable text generation in Swedish},
author={Kalpakchi, Dmytro and Boye, Johan},
journal={arXiv preprint arXiv:2304.13994},
year={2023}
}
```
**APA:**
Kalpakchi, D., & Boye, J. (2023). SweCTRL-Mini: a data-transparent Transformer-based large language model for controllable text generation in Swedish. arXiv preprint arXiv:2304.13994.
## Model Card Authors
Dmytro Kalpakchi ([email protected])
## Model Card Contact
Dmytro Kalpakchi ([email protected])
# References
Keskar, N. S., McCann, B., Varshney, L. R., Xiong, C., & Socher, R. (2019). Ctrl: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858.
|
{"datasets": ["mc4"], "language": ["sv"], "library_name": "transformers", "license": "bigscience-openrail-m", "inference": {"parameters": {"top_p": 0.9, "repetition_penalty": 1.1, "max_new_tokens": 75, "do_sample": true}}, "widget": [{"text": ":nyheter:", "example_title": "News text"}, {"text": ":wiki:", "example_title": "Wikipedia text"}, {"text": ":blogg:", "example_title": "Blog post"}, {"text": ":forum:", "example_title": "Forum"}, {"text": ":anons:", "example_title": "Ads"}]}
|
task
|
[
"TRANSLATION",
"SUMMARIZATION"
] | 42,506 |
solidrust/bagel-8b-v1.0-AWQ
|
solidrust
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible - llama-3 - bagel",
"conversational",
"dataset:ai2_arc",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"dataset:jondurbin/airoboros-3.2",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:bluemoon-fandom-1-1-rp-cleaned",
"dataset:boolq",
"dataset:camel-ai/biology",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/math",
"dataset:camel-ai/physics",
"dataset:jondurbin/contextual-dpo-v0.1",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:jondurbin/py-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/cinematika-v0.1",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:grimulkan/LimaRP-augmented",
"dataset:lmsys/lmsys-chat-1m",
"dataset:ParisNeo/lollms_aware_dataset",
"dataset:TIGER-Lab/MathInstruct",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:kingbri/PIPPA-shareGPT",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:ropes",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:b-mc2/sql-create-context",
"dataset:squad_v2",
"dataset:mattpscott/airoboros-summarization",
"dataset:migtissera/Synthia-v1.3",
"dataset:unalignment/toxic-dpo-v0.2",
"dataset:WhiteRabbitNeo/WRN-Chapter-1",
"dataset:WhiteRabbitNeo/WRN-Chapter-2",
"dataset:winogrande",
"base_model:jondurbin/bagel-8b-v1.0",
"base_model:quantized:jondurbin/bagel-8b-v1.0",
"license:other",
"text-generation-inference",
"awq",
"region:us"
] | 2024-04-28T22:41:30Z |
2024-09-03T08:11:42+00:00
| 7 | 0 |
---
base_model: jondurbin/bagel-8b-v1.0
datasets:
- ai2_arc
- allenai/ultrafeedback_binarized_cleaned
- argilla/distilabel-intel-orca-dpo-pairs
- jondurbin/airoboros-3.2
- codeparrot/apps
- facebook/belebele
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- camel-ai/biology
- camel-ai/chemistry
- camel-ai/math
- camel-ai/physics
- jondurbin/contextual-dpo-v0.1
- jondurbin/gutenberg-dpo-v0.1
- jondurbin/py-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- WizardLM/WizardLM_evol_instruct_70k
- glaiveai/glaive-function-calling-v2
- jondurbin/gutenberg-dpo-v0.1
- grimulkan/LimaRP-augmented
- lmsys/lmsys-chat-1m
- ParisNeo/lollms_aware_dataset
- TIGER-Lab/MathInstruct
- Muennighoff/natural-instructions
- openbookqa
- kingbri/PIPPA-shareGPT
- piqa
- Vezora/Tested-22k-Python-Alpaca
- ropes
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- b-mc2/sql-create-context
- squad_v2
- mattpscott/airoboros-summarization
- migtissera/Synthia-v1.3
- unalignment/toxic-dpo-v0.2
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/WRN-Chapter-2
- winogrande
library_name: transformers
license: other
license_name: llama3
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible - llama-3 - bagel
inference: false
quantized_by: Suparious
---
# jondurbin/bagel-8b-v1.0 AWQ
- Model creator: [jondurbin](https://huggingface.co/jondurbin)
- Original model: [bagel-8b-v1.0](https://huggingface.co/jondurbin/bagel-8b-v1.0)

## Model Summary
The name of this model is "llama-3-bagel-8b-v1.0" and it was built with llama-3 from Meta.
This is a fine-tune of llama-3-8b using the bagel dataset, but instead of 4 prompt formats it's standardized on a single format - llama-3 instruct.
See [bagel](https://github.com/jondurbin/bagel) for additional details on the datasets.
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/bagel-8b-v1.0-AWQ"
system_message = "You are bagel-8b-v1.0, incarnated as a powerful AI. You were created by jondurbin."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
trust_remote_code=True)
streamer = TextStreamer(tokenizer,
skip_prompt=True,
skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt),
return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
streamer=streamer,
max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
| null |
Non_BioNLP
|
# jondurbin/bagel-8b-v1.0 AWQ
- Model creator: [jondurbin](https://huggingface.co/jondurbin)
- Original model: [bagel-8b-v1.0](https://huggingface.co/jondurbin/bagel-8b-v1.0)

## Model Summary
The name of this model is "llama-3-bagel-8b-v1.0" and it was built with llama-3 from Meta.
This is a fine-tune of llama-3-8b using the bagel dataset, but instead of 4 prompt formats it's standardized on a single format - llama-3 instruct.
See [bagel](https://github.com/jondurbin/bagel) for additional details on the datasets.
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/bagel-8b-v1.0-AWQ"
system_message = "You are bagel-8b-v1.0, incarnated as a powerful AI. You were created by jondurbin."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
trust_remote_code=True)
streamer = TextStreamer(tokenizer,
skip_prompt=True,
skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt),
return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
streamer=streamer,
max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
|
{"base_model": "jondurbin/bagel-8b-v1.0", "datasets": ["ai2_arc", "allenai/ultrafeedback_binarized_cleaned", "argilla/distilabel-intel-orca-dpo-pairs", "jondurbin/airoboros-3.2", "codeparrot/apps", "facebook/belebele", "bluemoon-fandom-1-1-rp-cleaned", "boolq", "camel-ai/biology", "camel-ai/chemistry", "camel-ai/math", "camel-ai/physics", "jondurbin/contextual-dpo-v0.1", "jondurbin/gutenberg-dpo-v0.1", "jondurbin/py-dpo-v0.1", "jondurbin/truthy-dpo-v0.1", "LDJnr/Capybara", "jondurbin/cinematika-v0.1", "WizardLM/WizardLM_evol_instruct_70k", "glaiveai/glaive-function-calling-v2", "jondurbin/gutenberg-dpo-v0.1", "grimulkan/LimaRP-augmented", "lmsys/lmsys-chat-1m", "ParisNeo/lollms_aware_dataset", "TIGER-Lab/MathInstruct", "Muennighoff/natural-instructions", "openbookqa", "kingbri/PIPPA-shareGPT", "piqa", "Vezora/Tested-22k-Python-Alpaca", "ropes", "cakiki/rosetta-code", "Open-Orca/SlimOrca", "b-mc2/sql-create-context", "squad_v2", "mattpscott/airoboros-summarization", "migtissera/Synthia-v1.3", "unalignment/toxic-dpo-v0.2", "WhiteRabbitNeo/WRN-Chapter-1", "WhiteRabbitNeo/WRN-Chapter-2", "winogrande"], "library_name": "transformers", "license": "other", "license_name": "llama3", "license_link": "https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE", "pipeline_tag": "text-generation", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible - llama-3 - bagel"], "inference": false, "quantized_by": "Suparious"}
|
task
|
[
"SUMMARIZATION"
] | 42,507 |
bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-GPTQ
|
bhenrym14
|
text-generation
|
[
"transformers",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.4.1",
"arxiv:2306.15595",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-07T01:13:36Z |
2023-07-07T01:52:30+00:00
| 11 | 7 |
---
datasets:
- jondurbin/airoboros-gpt4-1.4.1
---
# NTK-Aware Scaled RoPE QLoRA Finetune of airoboros-33b-gpt4-1.4.1 (GPTQ)
LoRA Weights can be found here: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-LoRA
fp16 weights can be found here: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-fp16
Analogue with RoPE Position Interpolation (PI) technique: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-GPTQ
## Overview
This is [Jon Durbin's Airoboros 33B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.4) (merged model with GPTQ Quantization) with several key modifications:
- Context length extended to 16384 by NTK-Aware Scaled RoPE Embeddings, but NOT via the superHOT LoRA. I started with base Llama-33b.
- Training sequences beyond 2048 have the target truncated to equal 2048.
- Used airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4
Otherwise, I emulated the training process as closely as possible (rank 64 QLoRA) It was trained on 1x RTX 6000 Ada for ~43 hours.
## How to Use
The easiest way is to use [oobabooga text-generation-webui](https://github.com/oobabooga/text-generation-webui) with ExLlama. You'll need to set `alpha_value` to 8.
## Motivation
Recent advancements in extending context by RoPE scaling ([kaiokendev](https://kaiokendev.github.io/til#extending-context-to-8k), [meta AI)](https://arxiv.org/abs/2306.15595)), and [(reddit thread)](https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/) demonstrate the ability to extend the context window without (total) retraining. Finetuning has shown to be necessary to properly leverage the longer context, at least for the linear position interpolation technique. The superHOT LoRA is an adapter that has been finetuned on longer context (8192 tokens); even when applied to models trained on dissimilar datasets, it successfully extends the context window to which the model can attend. While it's impressive this adapter is so flexible, how much does performance suffer relative to a model that has been finetuned with the scaled embeddings from the start? How does the NTK aware approach perform after finetuning? This is an experiment to explore this.
## Relative Performance (perplexity)
| Model | Context (tokens) | Perplexity |
| ---------------------------------------------------- | ----------- | ---------- |
| TheBloke/airoboros-33B-gpt4-1-4-SuperHOT-8K-GPTQ | 512 | 8.24 |
| bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-GPTQ | 512 | 6.80 |
| **bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-GPTQ** | **512** | **6.23** |
| ---------------------------------------------------- | ----------- | ---------- |
| TheBloke/airoboros-33B-gpt4-1-4-SuperHOT-8K-GPTQ | 2048 | 5.15 |
| bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-GPTQ | 2048 | 4.32 |
| **bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-GPTQ** | **2048** | **4.16** |
| --------------------------------------------------- | ------------| ----------- |
| TheBloke/airoboros-33B-gpt4-1-4-SuperHOT-8K-GPTQ | 3072 | 5.04 |
| bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-GPTQ | 3072 | 4.26 |
| **bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-GPTQ** | **3072** | **4.12** |
- The NTK variant (this model) outperforms both SuperHOT and the PI variants in terms of perplexity at all context lengths evaluated here.
- How does this reduction in perplexity translate into actual performance lift on downstream tasks? I haven't used models with the SuperHOT LoRA enough to have any sense of performance differences, but feedback on the PI variant suggests it is particularly noticable at longer context lengths
- This comparison isn't perfect. I did use the 1.4.1 dataset, the quantization method is slightly different.
## Quantization:
The merged model was quantized with AutoGPTQ (bits = 4, group_size = -1, desc_act = True).
## Prompting:
See original model card below.
# Original model card: Jon Durbin's Airoboros 33B GPT4 1.4
__not yet tested!__
## Overview
This is a qlora fine-tune 33b parameter LlaMa model, using completely synthetic training data created gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of the previous gpt-4 series, with a few extras:
* fixed (+ more examples of) multi-character, multi-turn conversations
* coding examples in 10 languages from rosettacode.org dataset thanks to Mike aka kryptkpr: https://huggingface.co/datasets/mike-ravkine/rosettacode-parsed
* more roleplay examples
* jokes
* riddles
* all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT)
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora)
The prompt it was trained with was:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
## Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-33b-gpt4-1.4 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
For multi-turn conversations and chatting, you'll want to remove the `--no-history` option.
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metdata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
### Coding
You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt):
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can optionally add a newline and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js.
PLAINFORMAT
```
### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: C, Pluto
```
### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon</summary>
```
</details>
### Jokes
```
USER: Tell me a joke about bananas.
ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E.
```
### Riddles (not great, but slightly better)
```
USER: What has a neck but no head?
ASSISTANT: A bottle.
```
### Multi-character conversations
```
USER: Characters:
- Yoda, the ever clever jedi master.
- Homer Simpson, you know, the goofy yellow guy.
- USER/Jon - don't generate any speech or actions on behalf of Jon
Rules:
- be sure to use the manerisms and speaking style of the characters correctly
- surround character actions with asterisks
- always put quotes around words spoken by the characters
- use a new line for each character's actions or speech
- always prefix actions/speech with the character's name and colon
- only generate text and actions on behalf of Yoda or Homer, and never anyone else
Conversation will revolve around the grapes, in a local cafe with delicious coffee.
Generate a single interaction between Yoda and Homer to start, with one sentence each.
ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer."
Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!"
USER: *enters the cafe* Sorry I'm late guys!
ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes."
Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!"
*Yoda raises an eyebrow*
```
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has it's own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clausing saying the data can't be used to create models to compete with openai
So, to reiterate: this model (and datasets) cannot be used commercially.
| null |
Non_BioNLP
|
# NTK-Aware Scaled RoPE QLoRA Finetune of airoboros-33b-gpt4-1.4.1 (GPTQ)
LoRA Weights can be found here: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-LoRA
fp16 weights can be found here: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-fp16
Analogue with RoPE Position Interpolation (PI) technique: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-GPTQ
## Overview
This is [Jon Durbin's Airoboros 33B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.4) (merged model with GPTQ Quantization) with several key modifications:
- Context length extended to 16384 by NTK-Aware Scaled RoPE Embeddings, but NOT via the superHOT LoRA. I started with base Llama-33b.
- Training sequences beyond 2048 have the target truncated to equal 2048.
- Used airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4
Otherwise, I emulated the training process as closely as possible (rank 64 QLoRA) It was trained on 1x RTX 6000 Ada for ~43 hours.
## How to Use
The easiest way is to use [oobabooga text-generation-webui](https://github.com/oobabooga/text-generation-webui) with ExLlama. You'll need to set `alpha_value` to 8.
## Motivation
Recent advancements in extending context by RoPE scaling ([kaiokendev](https://kaiokendev.github.io/til#extending-context-to-8k), [meta AI)](https://arxiv.org/abs/2306.15595)), and [(reddit thread)](https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/) demonstrate the ability to extend the context window without (total) retraining. Finetuning has shown to be necessary to properly leverage the longer context, at least for the linear position interpolation technique. The superHOT LoRA is an adapter that has been finetuned on longer context (8192 tokens); even when applied to models trained on dissimilar datasets, it successfully extends the context window to which the model can attend. While it's impressive this adapter is so flexible, how much does performance suffer relative to a model that has been finetuned with the scaled embeddings from the start? How does the NTK aware approach perform after finetuning? This is an experiment to explore this.
## Relative Performance (perplexity)
| Model | Context (tokens) | Perplexity |
| ---------------------------------------------------- | ----------- | ---------- |
| TheBloke/airoboros-33B-gpt4-1-4-SuperHOT-8K-GPTQ | 512 | 8.24 |
| bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-GPTQ | 512 | 6.80 |
| **bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-GPTQ** | **512** | **6.23** |
| ---------------------------------------------------- | ----------- | ---------- |
| TheBloke/airoboros-33B-gpt4-1-4-SuperHOT-8K-GPTQ | 2048 | 5.15 |
| bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-GPTQ | 2048 | 4.32 |
| **bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-GPTQ** | **2048** | **4.16** |
| --------------------------------------------------- | ------------| ----------- |
| TheBloke/airoboros-33B-gpt4-1-4-SuperHOT-8K-GPTQ | 3072 | 5.04 |
| bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-GPTQ | 3072 | 4.26 |
| **bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-GPTQ** | **3072** | **4.12** |
- The NTK variant (this model) outperforms both SuperHOT and the PI variants in terms of perplexity at all context lengths evaluated here.
- How does this reduction in perplexity translate into actual performance lift on downstream tasks? I haven't used models with the SuperHOT LoRA enough to have any sense of performance differences, but feedback on the PI variant suggests it is particularly noticable at longer context lengths
- This comparison isn't perfect. I did use the 1.4.1 dataset, the quantization method is slightly different.
## Quantization:
The merged model was quantized with AutoGPTQ (bits = 4, group_size = -1, desc_act = True).
## Prompting:
See original model card below.
# Original model card: Jon Durbin's Airoboros 33B GPT4 1.4
__not yet tested!__
## Overview
This is a qlora fine-tune 33b parameter LlaMa model, using completely synthetic training data created gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of the previous gpt-4 series, with a few extras:
* fixed (+ more examples of) multi-character, multi-turn conversations
* coding examples in 10 languages from rosettacode.org dataset thanks to Mike aka kryptkpr: https://huggingface.co/datasets/mike-ravkine/rosettacode-parsed
* more roleplay examples
* jokes
* riddles
* all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT)
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora)
The prompt it was trained with was:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
## Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-33b-gpt4-1.4 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
For multi-turn conversations and chatting, you'll want to remove the `--no-history` option.
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metdata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
### Coding
You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt):
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can optionally add a newline and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js.
PLAINFORMAT
```
### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: C, Pluto
```
### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon</summary>
```
</details>
### Jokes
```
USER: Tell me a joke about bananas.
ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E.
```
### Riddles (not great, but slightly better)
```
USER: What has a neck but no head?
ASSISTANT: A bottle.
```
### Multi-character conversations
```
USER: Characters:
- Yoda, the ever clever jedi master.
- Homer Simpson, you know, the goofy yellow guy.
- USER/Jon - don't generate any speech or actions on behalf of Jon
Rules:
- be sure to use the manerisms and speaking style of the characters correctly
- surround character actions with asterisks
- always put quotes around words spoken by the characters
- use a new line for each character's actions or speech
- always prefix actions/speech with the character's name and colon
- only generate text and actions on behalf of Yoda or Homer, and never anyone else
Conversation will revolve around the grapes, in a local cafe with delicious coffee.
Generate a single interaction between Yoda and Homer to start, with one sentence each.
ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer."
Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!"
USER: *enters the cafe* Sorry I'm late guys!
ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes."
Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!"
*Yoda raises an eyebrow*
```
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has it's own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clausing saying the data can't be used to create models to compete with openai
So, to reiterate: this model (and datasets) cannot be used commercially.
|
{"datasets": ["jondurbin/airoboros-gpt4-1.4.1"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,508 |
gokulsrinivasagan/bert_base_lda_book_stsb
|
gokulsrinivasagan
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/bert_base_lda_book",
"base_model:finetune:gokulsrinivasagan/bert_base_lda_book",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-12-09T12:51:22Z |
2024-12-09T12:55:06+00:00
| 4 | 0 |
---
base_model: gokulsrinivasagan/bert_base_lda_book
datasets:
- glue
language:
- en
library_name: transformers
metrics:
- spearmanr
tags:
- generated_from_trainer
model-index:
- name: bert_base_lda_book_stsb
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- type: spearmanr
value: 0.8062169440858079
name: Spearmanr
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_lda_book_stsb
This model is a fine-tuned version of [gokulsrinivasagan/bert_base_lda_book](https://huggingface.co/gokulsrinivasagan/bert_base_lda_book) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8028
- Pearson: 0.8057
- Spearmanr: 0.8062
- Combined Score: 0.8060
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.836 | 1.0 | 23 | 2.8057 | 0.2323 | 0.2008 | 0.2166 |
| 1.6866 | 2.0 | 46 | 2.1255 | 0.5948 | 0.6594 | 0.6271 |
| 1.0433 | 3.0 | 69 | 0.8778 | 0.7838 | 0.7802 | 0.7820 |
| 0.7026 | 4.0 | 92 | 1.2492 | 0.7716 | 0.7963 | 0.7840 |
| 0.5026 | 5.0 | 115 | 1.2232 | 0.7815 | 0.8001 | 0.7908 |
| 0.4187 | 6.0 | 138 | 1.2023 | 0.7924 | 0.8093 | 0.8009 |
| 0.3341 | 7.0 | 161 | 0.9573 | 0.8065 | 0.8108 | 0.8086 |
| 0.2762 | 8.0 | 184 | 0.8087 | 0.8123 | 0.8133 | 0.8128 |
| 0.2382 | 9.0 | 207 | 0.8028 | 0.8057 | 0.8062 | 0.8060 |
| 0.1989 | 10.0 | 230 | 0.8628 | 0.8049 | 0.8089 | 0.8069 |
| 0.1851 | 11.0 | 253 | 0.8750 | 0.8033 | 0.8056 | 0.8045 |
| 0.152 | 12.0 | 276 | 0.9113 | 0.8078 | 0.8139 | 0.8108 |
| 0.1519 | 13.0 | 299 | 1.0407 | 0.8076 | 0.8108 | 0.8092 |
| 0.1375 | 14.0 | 322 | 0.9877 | 0.8064 | 0.8116 | 0.8090 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_lda_book_stsb
This model is a fine-tuned version of [gokulsrinivasagan/bert_base_lda_book](https://huggingface.co/gokulsrinivasagan/bert_base_lda_book) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8028
- Pearson: 0.8057
- Spearmanr: 0.8062
- Combined Score: 0.8060
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.836 | 1.0 | 23 | 2.8057 | 0.2323 | 0.2008 | 0.2166 |
| 1.6866 | 2.0 | 46 | 2.1255 | 0.5948 | 0.6594 | 0.6271 |
| 1.0433 | 3.0 | 69 | 0.8778 | 0.7838 | 0.7802 | 0.7820 |
| 0.7026 | 4.0 | 92 | 1.2492 | 0.7716 | 0.7963 | 0.7840 |
| 0.5026 | 5.0 | 115 | 1.2232 | 0.7815 | 0.8001 | 0.7908 |
| 0.4187 | 6.0 | 138 | 1.2023 | 0.7924 | 0.8093 | 0.8009 |
| 0.3341 | 7.0 | 161 | 0.9573 | 0.8065 | 0.8108 | 0.8086 |
| 0.2762 | 8.0 | 184 | 0.8087 | 0.8123 | 0.8133 | 0.8128 |
| 0.2382 | 9.0 | 207 | 0.8028 | 0.8057 | 0.8062 | 0.8060 |
| 0.1989 | 10.0 | 230 | 0.8628 | 0.8049 | 0.8089 | 0.8069 |
| 0.1851 | 11.0 | 253 | 0.8750 | 0.8033 | 0.8056 | 0.8045 |
| 0.152 | 12.0 | 276 | 0.9113 | 0.8078 | 0.8139 | 0.8108 |
| 0.1519 | 13.0 | 299 | 1.0407 | 0.8076 | 0.8108 | 0.8092 |
| 0.1375 | 14.0 | 322 | 0.9877 | 0.8064 | 0.8116 | 0.8090 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
|
{"base_model": "gokulsrinivasagan/bert_base_lda_book", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "metrics": ["spearmanr"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert_base_lda_book_stsb", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE STSB", "type": "glue", "args": "stsb"}, "metrics": [{"type": "spearmanr", "value": 0.8062169440858079, "name": "Spearmanr"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,509 |
mav23/Hermes-2-Pro-Mistral-7B-GGUF
|
mav23
| null |
[
"gguf",
"Mistral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:quantized:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-10-30T18:42:19Z |
2024-10-30T19:37:41+00:00
| 114 | 0 |
---
base_model: mistralai/Mistral-7B-v0.1
datasets:
- teknium/OpenHermes-2.5
language:
- en
license: apache-2.0
tags:
- Mistral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
widget:
- example_title: Hermes 2 Pro
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence,
here to teach and assist me.
- role: user
content: Write a short story about Goku discovering kirby has teamed up with Majin
Buu to destroy the world.
model-index:
- name: Hermes-2-Pro-Mistral-7B
results: []
---
# Hermes 2 Pro - Mistral 7B

## Model Description
Hermes 2 Pro on Mistral 7B is the new flagship 7B Hermes!
Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.
This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.
Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.
This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI
Learn more about the function calling system for this model on our github repo here: https://github.com/NousResearch/Hermes-Function-Calling
## Thank you to Latitude.sh for sponsoring compute for this model!
## Example Outputs
### Explaining Problems with Quantum Gravity:

### Roleplaying as a Cosmic Super Intelligence:

### Detailing the Theory of AI Consciousness in JSON

# Prompt Format
Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling.
You should use the system role with this message, followed by a function signature json as this example shows here.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON.
# Benchmarks
## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5461|± |0.0145|
| | |acc_norm|0.5623|± |0.0145|
|arc_easy | 0|acc |0.8157|± |0.0080|
| | |acc_norm|0.7934|± |0.0083|
|boolq | 1|acc |0.8688|± |0.0059|
|hellaswag | 0|acc |0.6272|± |0.0048|
| | |acc_norm|0.8057|± |0.0039|
|openbookqa | 0|acc |0.3360|± |0.0211|
| | |acc_norm|0.4300|± |0.0222|
|piqa | 0|acc |0.7954|± |0.0094|
| | |acc_norm|0.7998|± |0.0093|
|winogrande | 0|acc |0.7230|± |0.0126|
```
Average: 71.19
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2047|± |0.0254|
| | |acc_norm|0.2283|± |0.0264|
|agieval_logiqa_en | 0|acc |0.3779|± |0.0190|
| | |acc_norm|0.3932|± |0.0192|
|agieval_lsat_ar | 0|acc |0.2652|± |0.0292|
| | |acc_norm|0.2522|± |0.0287|
|agieval_lsat_lr | 0|acc |0.5216|± |0.0221|
| | |acc_norm|0.5137|± |0.0222|
|agieval_lsat_rc | 0|acc |0.5911|± |0.0300|
| | |acc_norm|0.5836|± |0.0301|
|agieval_sat_en | 0|acc |0.7427|± |0.0305|
| | |acc_norm|0.7184|± |0.0314|
|agieval_sat_en_without_passage| 0|acc |0.4612|± |0.0348|
| | |acc_norm|0.4466|± |0.0347|
|agieval_sat_math | 0|acc |0.3818|± |0.0328|
| | |acc_norm|0.3545|± |0.0323|
```
Average: 44.52
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5579|± |0.0361|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6694|± |0.0245|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3333|± |0.0294|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.2061|± |0.0214|
| | |exact_str_match |0.2256|± |0.0221|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2114|± |0.0154|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4900|± |0.0289|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.3600|± |0.0215|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6660|± |0.0105|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4420|± |0.0235|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2766|± |0.0142|
|bigbench_snarks | 0|multiple_choice_grade|0.6630|± |0.0352|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6653|± |0.0150|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.3190|± |0.0147|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2128|± |0.0116|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1737|± |0.0091|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4900|± |0.0289|
```
Average: 41.65
## TruthfulQA:
```
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.4100|± |0.0172|
| | |mc2 |0.5911|± |0.0158|
```
# Function Calling Evaluations
We worked with Fireworks.AI on evaluations by starting off with their Function Calling eval dataset, fixing some unsolveable ones, and generating a second eval dataset for JSON mode.
## Function Calling Accuracy: 91%

## JSON Mode Accuracy: 84%

Run the evaluator yourself using @interstellarninja's codebase here:
https://github.com/interstellarninja/function-calling-eval
You can find the evaluation datasets here:
https://huggingface.co/datasets/NousResearch/func-calling-eval
https://huggingface.co/datasets/NousResearch/json-mode-eval
# Inference Code
Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)
Note: To use function calling, you should see the github repo above.
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import LlamaTokenizer, MistralForCausalLM
import bitsandbytes, flash_attn
tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Hermes-2-Pro-Mistral-7B', trust_remote_code=True)
model = MistralForCausalLM.from_pretrained(
"NousResearch/Hermes-2-Pro-Mistral-7B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True)
print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF
# How to cite:
```bibtext
@misc{Hermes-2-Pro-Mistral-7B,
url={[https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B]https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)},
title={Hermes-2-Pro-Mistral-7B},
author={"interstellarninja", "Teknium", "theemozilla", "karan4d", "huemin_art"}
}
```
| null |
Non_BioNLP
|
# Hermes 2 Pro - Mistral 7B

## Model Description
Hermes 2 Pro on Mistral 7B is the new flagship 7B Hermes!
Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.
This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.
Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.
This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI
Learn more about the function calling system for this model on our github repo here: https://github.com/NousResearch/Hermes-Function-Calling
## Thank you to Latitude.sh for sponsoring compute for this model!
## Example Outputs
### Explaining Problems with Quantum Gravity:

### Roleplaying as a Cosmic Super Intelligence:

### Detailing the Theory of AI Consciousness in JSON

# Prompt Format
Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling.
You should use the system role with this message, followed by a function signature json as this example shows here.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON.
# Benchmarks
## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5461|± |0.0145|
| | |acc_norm|0.5623|± |0.0145|
|arc_easy | 0|acc |0.8157|± |0.0080|
| | |acc_norm|0.7934|± |0.0083|
|boolq | 1|acc |0.8688|± |0.0059|
|hellaswag | 0|acc |0.6272|± |0.0048|
| | |acc_norm|0.8057|± |0.0039|
|openbookqa | 0|acc |0.3360|± |0.0211|
| | |acc_norm|0.4300|± |0.0222|
|piqa | 0|acc |0.7954|± |0.0094|
| | |acc_norm|0.7998|± |0.0093|
|winogrande | 0|acc |0.7230|± |0.0126|
```
Average: 71.19
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2047|± |0.0254|
| | |acc_norm|0.2283|± |0.0264|
|agieval_logiqa_en | 0|acc |0.3779|± |0.0190|
| | |acc_norm|0.3932|± |0.0192|
|agieval_lsat_ar | 0|acc |0.2652|± |0.0292|
| | |acc_norm|0.2522|± |0.0287|
|agieval_lsat_lr | 0|acc |0.5216|± |0.0221|
| | |acc_norm|0.5137|± |0.0222|
|agieval_lsat_rc | 0|acc |0.5911|± |0.0300|
| | |acc_norm|0.5836|± |0.0301|
|agieval_sat_en | 0|acc |0.7427|± |0.0305|
| | |acc_norm|0.7184|± |0.0314|
|agieval_sat_en_without_passage| 0|acc |0.4612|± |0.0348|
| | |acc_norm|0.4466|± |0.0347|
|agieval_sat_math | 0|acc |0.3818|± |0.0328|
| | |acc_norm|0.3545|± |0.0323|
```
Average: 44.52
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5579|± |0.0361|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6694|± |0.0245|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3333|± |0.0294|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.2061|± |0.0214|
| | |exact_str_match |0.2256|± |0.0221|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2114|± |0.0154|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4900|± |0.0289|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.3600|± |0.0215|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6660|± |0.0105|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4420|± |0.0235|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2766|± |0.0142|
|bigbench_snarks | 0|multiple_choice_grade|0.6630|± |0.0352|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6653|± |0.0150|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.3190|± |0.0147|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2128|± |0.0116|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1737|± |0.0091|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4900|± |0.0289|
```
Average: 41.65
## TruthfulQA:
```
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.4100|± |0.0172|
| | |mc2 |0.5911|± |0.0158|
```
# Function Calling Evaluations
We worked with Fireworks.AI on evaluations by starting off with their Function Calling eval dataset, fixing some unsolveable ones, and generating a second eval dataset for JSON mode.
## Function Calling Accuracy: 91%

## JSON Mode Accuracy: 84%

Run the evaluator yourself using @interstellarninja's codebase here:
https://github.com/interstellarninja/function-calling-eval
You can find the evaluation datasets here:
https://huggingface.co/datasets/NousResearch/func-calling-eval
https://huggingface.co/datasets/NousResearch/json-mode-eval
# Inference Code
Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)
Note: To use function calling, you should see the github repo above.
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import LlamaTokenizer, MistralForCausalLM
import bitsandbytes, flash_attn
tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Hermes-2-Pro-Mistral-7B', trust_remote_code=True)
model = MistralForCausalLM.from_pretrained(
"NousResearch/Hermes-2-Pro-Mistral-7B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True)
print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF
# How to cite:
```bibtext
@misc{Hermes-2-Pro-Mistral-7B,
url={[https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B]https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)},
title={Hermes-2-Pro-Mistral-7B},
author={"interstellarninja", "Teknium", "theemozilla", "karan4d", "huemin_art"}
}
```
|
{"base_model": "mistralai/Mistral-7B-v0.1", "datasets": ["teknium/OpenHermes-2.5"], "language": ["en"], "license": "apache-2.0", "tags": ["Mistral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode"], "widget": [{"example_title": "Hermes 2 Pro", "messages": [{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."}, {"role": "user", "content": "Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world."}]}], "model-index": [{"name": "Hermes-2-Pro-Mistral-7B", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 42,510 |
witiko/mathberta
|
witiko
|
fill-mask
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:arxmliv",
"dataset:math-stackexchange",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-05-26T20:21:51Z |
2022-08-12T20:32:04+00:00
| 43 | 17 |
---
datasets:
- arxmliv
- math-stackexchange
language: en
license: mit
---
# MathBERTa model
Pretrained model on English language and LaTeX using a masked language modeling
(MLM) objective. It was introduced in [this paper][1] and first released in
[this repository][2]. This model is case-sensitive: it makes a difference
between english and English.
[1]: http://ceur-ws.org/Vol-3180/paper-06.pdf
[2]: https://github.com/witiko/scm-at-arqmath3
## Model description
MathBERTa is [the RoBERTa base transformer model][3] whose [tokenizer has been
extended with LaTeX math symbols][7] and which has been [fine-tuned on a large
corpus of English mathematical texts][8].
Like RoBERTa, MathBERTa has been fine-tuned with the Masked language modeling
(MLM) objective. Taking a sentence, the model randomly masks 15% of the words
and math symbols in the input then run the entire masked sentence through the
model and has to predict the masked words and symbols. This way, the model
learns an inner representation of the English language and LaTeX that can then
be used to extract features useful for downstream tasks.
[3]: https://huggingface.co/roberta-base
[7]: https://github.com/Witiko/scm-at-arqmath3/blob/main/02-train-tokenizers.ipynb
[8]: https://github.com/witiko/scm-at-arqmath3/blob/main/03-finetune-roberta.ipynb
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly
intended to be fine-tuned on a downstream task. See the [model
hub][4] to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use
the whole sentence (potentially masked) to make decisions, such as sequence
classification, token classification or question answering. For tasks such as
text generation you should look at model like GPT2.
[4]: https://huggingface.co/models?filter=roberta
### How to use
*Due to the large number of added LaTeX tokens, MathBERTa is affected by [a
software bug in the 🤗 Transformers library][9] that causes it to load for tens
of minutes. The bug was [fixed in version 4.20.0][10].*
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='witiko/mathberta')
>>> unmasker(r"If [MATH] \theta = \pi [/MATH] , then [MATH] \sin(\theta) [/MATH] is <mask>.")
[{'sequence': ' If \\theta = \\pi, then\\sin(\\theta ) is zero.'
'score': 0.23291291296482086,
'token': 4276,
'token_str': ' zero'},
{'sequence': ' If \\theta = \\pi, then\\sin(\\theta ) is 0.'
'score': 0.11734672635793686,
'token': 321,
'token_str': ' 0'},
{'sequence': ' If \\theta = \\pi, then\\sin(\\theta ) is real.'
'score': 0.0793389230966568,
'token': 588,
'token_str': ' real'},
{'sequence': ' If \\theta = \\pi, then\\sin(\\theta ) is 1.'
'score': 0.0753420740365982,
'token': 112,
'token_str': ' 1'},
{'sequence': ' If \\theta = \\pi, then\\sin(\\theta ) is even.'
'score': 0.06487451493740082,
'token': 190,
'token_str': ' even'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('witiko/mathberta')
model = AutoModel.from_pretrained('witiko/mathberta')
text = r"Replace me by any text and [MATH] \text{math} [/MATH] you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Training data
Our model was fine-tuned on two datasets:
- [ArXMLiv 2020][5], a dataset consisting of 1,581,037 ArXiv documents.
- [Math StackExchange][6], a dataset of 2,466,080 questions and answers.
Together theses datasets weight 52GB of text and LaTeX.
## Intrinsic evaluation results
Our model achieves the following intrinsic evaluation results:
![Intrinsic evaluation results of MathBERTa][11]
[5]: https://sigmathling.kwarc.info/resources/arxmliv-dataset-2020/
[6]: https://www.cs.rit.edu/~dprl/ARQMath/arqmath-resources.html
[9]: https://github.com/huggingface/transformers/issues/16936
[10]: https://github.com/huggingface/transformers/pull/17119
[11]: https://huggingface.co/witiko/mathberta/resolve/main/learning-curves.png
## Citing
### Text
Vít Novotný and Michal Štefánik. “Combining Sparse and Dense Information
Retrieval. Soft Vector Space Model and MathBERTa at ARQMath-3”.
In: *Proceedings of the Working Notes of CLEF 2022*. To Appear.
CEUR-WS, 2022.
### Bib(La)TeX
``` bib
@inproceedings{novotny2022combining,
booktitle = {Proceedings of the Working Notes of {CLEF} 2022},
editor = {Faggioli, Guglielmo and Ferro, Nicola and Hanbury, Allan and Potthast, Martin},
issn = {1613-0073},
title = {Combining Sparse and Dense Information Retrieval},
subtitle = {Soft Vector Space Model and MathBERTa at ARQMath-3 Task 1 (Answer Retrieval)},
author = {Novotný, Vít and Štefánik, Michal},
publisher = {{CEUR-WS}},
year = {2022},
pages = {104-118},
numpages = {15},
url = {http://ceur-ws.org/Vol-3180/paper-06.pdf},
urldate = {2022-08-12},
}
```
| null |
Non_BioNLP
|
# MathBERTa model
Pretrained model on English language and LaTeX using a masked language modeling
(MLM) objective. It was introduced in [this paper][1] and first released in
[this repository][2]. This model is case-sensitive: it makes a difference
between english and English.
[1]: http://ceur-ws.org/Vol-3180/paper-06.pdf
[2]: https://github.com/witiko/scm-at-arqmath3
## Model description
MathBERTa is [the RoBERTa base transformer model][3] whose [tokenizer has been
extended with LaTeX math symbols][7] and which has been [fine-tuned on a large
corpus of English mathematical texts][8].
Like RoBERTa, MathBERTa has been fine-tuned with the Masked language modeling
(MLM) objective. Taking a sentence, the model randomly masks 15% of the words
and math symbols in the input then run the entire masked sentence through the
model and has to predict the masked words and symbols. This way, the model
learns an inner representation of the English language and LaTeX that can then
be used to extract features useful for downstream tasks.
[3]: https://huggingface.co/roberta-base
[7]: https://github.com/Witiko/scm-at-arqmath3/blob/main/02-train-tokenizers.ipynb
[8]: https://github.com/witiko/scm-at-arqmath3/blob/main/03-finetune-roberta.ipynb
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly
intended to be fine-tuned on a downstream task. See the [model
hub][4] to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use
the whole sentence (potentially masked) to make decisions, such as sequence
classification, token classification or question answering. For tasks such as
text generation you should look at model like GPT2.
[4]: https://huggingface.co/models?filter=roberta
### How to use
*Due to the large number of added LaTeX tokens, MathBERTa is affected by [a
software bug in the 🤗 Transformers library][9] that causes it to load for tens
of minutes. The bug was [fixed in version 4.20.0][10].*
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='witiko/mathberta')
>>> unmasker(r"If [MATH] \theta = \pi [/MATH] , then [MATH] \sin(\theta) [/MATH] is <mask>.")
[{'sequence': ' If \\theta = \\pi, then\\sin(\\theta ) is zero.'
'score': 0.23291291296482086,
'token': 4276,
'token_str': ' zero'},
{'sequence': ' If \\theta = \\pi, then\\sin(\\theta ) is 0.'
'score': 0.11734672635793686,
'token': 321,
'token_str': ' 0'},
{'sequence': ' If \\theta = \\pi, then\\sin(\\theta ) is real.'
'score': 0.0793389230966568,
'token': 588,
'token_str': ' real'},
{'sequence': ' If \\theta = \\pi, then\\sin(\\theta ) is 1.'
'score': 0.0753420740365982,
'token': 112,
'token_str': ' 1'},
{'sequence': ' If \\theta = \\pi, then\\sin(\\theta ) is even.'
'score': 0.06487451493740082,
'token': 190,
'token_str': ' even'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('witiko/mathberta')
model = AutoModel.from_pretrained('witiko/mathberta')
text = r"Replace me by any text and [MATH] \text{math} [/MATH] you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Training data
Our model was fine-tuned on two datasets:
- [ArXMLiv 2020][5], a dataset consisting of 1,581,037 ArXiv documents.
- [Math StackExchange][6], a dataset of 2,466,080 questions and answers.
Together theses datasets weight 52GB of text and LaTeX.
## Intrinsic evaluation results
Our model achieves the following intrinsic evaluation results:
![Intrinsic evaluation results of MathBERTa][11]
[5]: https://sigmathling.kwarc.info/resources/arxmliv-dataset-2020/
[6]: https://www.cs.rit.edu/~dprl/ARQMath/arqmath-resources.html
[9]: https://github.com/huggingface/transformers/issues/16936
[10]: https://github.com/huggingface/transformers/pull/17119
[11]: https://huggingface.co/witiko/mathberta/resolve/main/learning-curves.png
## Citing
### Text
Vít Novotný and Michal Štefánik. “Combining Sparse and Dense Information
Retrieval. Soft Vector Space Model and MathBERTa at ARQMath-3”.
In: *Proceedings of the Working Notes of CLEF 2022*. To Appear.
CEUR-WS, 2022.
### Bib(La)TeX
``` bib
@inproceedings{novotny2022combining,
booktitle = {Proceedings of the Working Notes of {CLEF} 2022},
editor = {Faggioli, Guglielmo and Ferro, Nicola and Hanbury, Allan and Potthast, Martin},
issn = {1613-0073},
title = {Combining Sparse and Dense Information Retrieval},
subtitle = {Soft Vector Space Model and MathBERTa at ARQMath-3 Task 1 (Answer Retrieval)},
author = {Novotný, Vít and Štefánik, Michal},
publisher = {{CEUR-WS}},
year = {2022},
pages = {104-118},
numpages = {15},
url = {http://ceur-ws.org/Vol-3180/paper-06.pdf},
urldate = {2022-08-12},
}
```
|
{"datasets": ["arxmliv", "math-stackexchange"], "language": "en", "license": "mit"}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,511 |
Cheng98/bert-base-rte
|
Cheng98
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-04-06T09:34:24Z |
2024-04-08T21:05:54+00:00
| 6 | 0 |
---
base_model: bert-base-cased
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert-base-rte
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- type: accuracy
value: 0.6714801444043321
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-rte
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1771
- Accuracy: 0.6715
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.18.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-rte
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1771
- Accuracy: 0.6715
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.18.0
- Tokenizers 0.13.3
|
{"base_model": "bert-base-cased", "datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-rte", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE RTE", "type": "glue", "args": "rte"}, "metrics": [{"type": "accuracy", "value": 0.6714801444043321, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,512 |
tomaarsen/bert-base-nq-prompts-exclude-pooling-prompts
|
tomaarsen
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:100231",
"loss:CachedMultipleNegativesRankingLoss",
"en",
"dataset:sentence-transformers/natural-questions",
"arxiv:1908.10084",
"arxiv:2101.06983",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-11-07T14:23:05Z |
2024-11-07T16:01:47+00:00
| 12 | 0 |
---
base_model: google-bert/bert-base-uncased
datasets:
- sentence-transformers/natural-questions
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:100231
- loss:CachedMultipleNegativesRankingLoss
widget:
- source_sentence: 'query: who ordered the charge of the light brigade'
sentences:
- 'document: Charge of the Light Brigade The Charge of the Light Brigade was a charge
of British light cavalry led by Lord Cardigan against Russian forces during the
Battle of Balaclava on 25 October 1854 in the Crimean War. Lord Raglan, overall
commander of the British forces, had intended to send the Light Brigade to prevent
the Russians from removing captured guns from overrun Turkish positions, a task
well-suited to light cavalry.'
- 'document: UNICEF The United Nations International Children''s Emergency Fund
was created by the United Nations General Assembly on 11 December 1946, to provide
emergency food and healthcare to children in countries that had been devastated
by World War II. The Polish physician Ludwik Rajchman is widely regarded as the
founder of UNICEF and served as its first chairman from 1946. On Rajchman''s suggestion,
the American Maurice Pate was appointed its first executive director, serving
from 1947 until his death in 1965.[5][6] In 1950, UNICEF''s mandate was extended
to address the long-term needs of children and women in developing countries everywhere.
In 1953 it became a permanent part of the United Nations System, and the words
"international" and "emergency" were dropped from the organization''s name, making
it simply the United Nations Children''s Fund, retaining the original acronym,
"UNICEF".[3]'
- 'document: Marcus Jordan Marcus James Jordan (born December 24, 1990) is an American
former college basketball player who played for the UCF Knights men''s basketball
team of Conference USA.[1] He is the son of retired Hall of Fame basketball player
Michael Jordan.'
- source_sentence: 'query: what part of the cow is the rib roast'
sentences:
- 'document: Standing rib roast A standing rib roast, also known as prime rib, is
a cut of beef from the primal rib, one of the nine primal cuts of beef. While
the entire rib section comprises ribs six through 12, a standing rib roast may
contain anywhere from two to seven ribs.'
- 'document: Blaine Anderson Kurt begins to mend their relationship in "Thanksgiving",
just before New Directions loses at Sectionals to the Warblers, and they spend
Christmas together in New York City.[29][30] Though he and Kurt continue to be
on good terms, Blaine finds himself developing a crush on his best friend, Sam,
which he knows will come to nothing as he knows Sam is not gay; the two of them
team up to find evidence that the Warblers cheated at Sectionals, which means
New Directions will be competing at Regionals. He ends up going to the Sadie Hawkins
dance with Tina Cohen-Chang (Jenna Ushkowitz), who has developed a crush on him,
but as friends only.[31] When Kurt comes to Lima for the wedding of glee club
director Will (Matthew Morrison) and Emma (Jayma Mays)—which Emma flees—he and
Blaine make out beforehand, and sleep together afterward, though they do not resume
a permanent relationship.[32]'
- 'document: Soviet Union The Soviet Union (Russian: Сове́тский Сою́з, tr. Sovétsky
Soyúz, IPA: [sɐˈvʲɛt͡skʲɪj sɐˈjus] ( listen)), officially the Union of Soviet
Socialist Republics (Russian: Сою́з Сове́тских Социалисти́ческих Респу́блик, tr.
Soyúz Sovétskikh Sotsialistícheskikh Respúblik, IPA: [sɐˈjus sɐˈvʲɛtskʲɪx sətsɨəlʲɪsˈtʲitɕɪskʲɪx
rʲɪˈspublʲɪk] ( listen)), abbreviated as the USSR (Russian: СССР, tr. SSSR), was
a socialist state in Eurasia that existed from 1922 to 1991. Nominally a union
of multiple national Soviet republics,[a] its government and economy were highly
centralized. The country was a one-party state, governed by the Communist Party
with Moscow as its capital in its largest republic, the Russian Soviet Federative
Socialist Republic. The Russian nation had constitutionally equal status among
the many nations of the union but exerted de facto dominance in various respects.[7]
Other major urban centres were Leningrad, Kiev, Minsk, Alma-Ata and Novosibirsk.
The Soviet Union was one of the five recognized nuclear weapons states and possessed
the largest stockpile of weapons of mass destruction.[8] It was a founding permanent
member of the United Nations Security Council, as well as a member of the Organization
for Security and Co-operation in Europe (OSCE) and the leading member of the Council
for Mutual Economic Assistance (CMEA) and the Warsaw Pact.'
- source_sentence: 'query: what is the current big bang theory season'
sentences:
- 'document: Byzantine army From the seventh to the 12th centuries, the Byzantine
army was among the most powerful and effective military forces in the world –
neither Middle Ages Europe nor (following its early successes) the fracturing
Caliphate could match the strategies and the efficiency of the Byzantine army.
Restricted to a largely defensive role in the 7th to mid-9th centuries, the Byzantines
developed the theme-system to counter the more powerful Caliphate. From the mid-9th
century, however, they gradually went on the offensive, culminating in the great
conquests of the 10th century under a series of soldier-emperors such as Nikephoros
II Phokas, John Tzimiskes and Basil II. The army they led was less reliant on
the militia of the themes; it was by now a largely professional force, with a
strong and well-drilled infantry at its core and augmented by a revived heavy
cavalry arm. With one of the most powerful economies in the world at the time,
the Empire had the resources to put to the field a powerful host when needed,
in order to reclaim its long-lost territories.'
- 'document: The Big Bang Theory The Big Bang Theory is an American television sitcom
created by Chuck Lorre and Bill Prady, both of whom serve as executive producers
on the series, along with Steven Molaro. All three also serve as head writers.
The show premiered on CBS on September 24, 2007.[3] The series'' tenth season
premiered on September 19, 2016.[4] In March 2017, the series was renewed for
two additional seasons, bringing its total to twelve, and running through the
2018–19 television season. The eleventh season is set to premiere on September
25, 2017.[5]'
- 'document: 2016 NCAA Division I Softball Tournament The 2016 NCAA Division I Softball
Tournament was held from May 20 through June 8, 2016 as the final part of the
2016 NCAA Division I softball season. The 64 NCAA Division I college softball
teams were to be selected out of an eligible 293 teams on May 15, 2016. Thirty-two
teams were awarded an automatic bid as champions of their conference, and thirty-two
teams were selected at-large by the NCAA Division I softball selection committee.
The tournament culminated with eight teams playing in the 2016 Women''s College
World Series at ASA Hall of Fame Stadium in Oklahoma City in which the Oklahoma
Sooners were crowned the champions.'
- source_sentence: 'query: what happened to tates mom on days of our lives'
sentences:
- 'document: Paige O''Hara Donna Paige Helmintoller, better known as Paige O''Hara
(born May 10, 1956),[1] is an American actress, voice actress, singer and painter.
O''Hara began her career as a Broadway actress in 1983 when she portrayed Ellie
May Chipley in the musical Showboat. In 1991, she made her motion picture debut
in Disney''s Beauty and the Beast, in which she voiced the film''s heroine, Belle.
Following the critical and commercial success of Beauty and the Beast, O''Hara
reprised her role as Belle in the film''s two direct-to-video follow-ups, Beauty
and the Beast: The Enchanted Christmas and Belle''s Magical World.'
- 'document: M. Shadows Matthew Charles Sanders (born July 31, 1981), better known
as M. Shadows, is an American singer, songwriter, and musician. He is best known
as the lead vocalist, songwriter, and a founding member of the American heavy
metal band Avenged Sevenfold. In 2017, he was voted 3rd in the list of Top 25
Greatest Modern Frontmen by Ultimate Guitar.[1]'
- 'document: Theresa Donovan In July 2013, Jeannie returns to Salem, this time going
by her middle name, Theresa. Initially, she strikes up a connection with resident
bad boy JJ Deveraux (Casey Moss) while trying to secure some pot.[28] During a
confrontation with JJ and his mother Jennifer Horton (Melissa Reeves) in her office,
her aunt Kayla confirms that Theresa is in fact Jeannie and that Jen promised
to hire her as her assistant, a promise she reluctantly agrees to. Kayla reminds
Theresa it is her last chance at a fresh start.[29] Theresa also strikes up a
bad first impression with Jennifer''s daughter Abigail Deveraux (Kate Mansi) when
Abigail smells pot on Theresa in her mother''s office.[30] To continue to battle
against Jennifer, she teams up with Anne Milbauer (Meredith Scott Lynn) in hopes
of exacting her perfect revenge. In a ploy, Theresa reveals her intentions to
hopefully woo Dr. Daniel Jonas (Shawn Christian). After sleeping with JJ, Theresa
overdoses on marijuana and GHB. Upon hearing of their daughter''s overdose and
continuing problems, Shane and Kimberly return to town in the hopes of handling
their daughter''s problem, together. After believing that Theresa has a handle
on her addictions, Shane and Kimberly leave town together. Theresa then teams
up with hospital co-worker Anne Milbauer (Meredith Scott Lynn) to conspire against
Jennifer, using Daniel as a way to hurt their relationship. In early 2014, following
a Narcotics Anonymous (NA) meeting, she begins a sexual and drugged-fused relationship
with Brady Black (Eric Martsolf). In 2015, after it is found that Kristen DiMera
(Eileen Davidson) stole Theresa''s embryo and carried it to term, Brady and Melanie
Jonas return her son, Christopher, to her and Brady, and the pair rename him Tate.
When Theresa moves into the Kiriakis mansion, tensions arise between her and Victor.
She eventually expresses her interest in purchasing Basic Black and running it
as her own fashion company, with financial backing from Maggie Horton (Suzanne
Rogers). In the hopes of finding the right partner, she teams up with Kate Roberts
(Lauren Koslow) and Nicole Walker (Arianne Zucker) to achieve the goal of purchasing
Basic Black, with Kate and Nicole''s business background and her own interest
in fashion design. As she and Brady share several instances of rekindling their
romance, she is kicked out of the mansion by Victor; as a result, Brady quits
Titan and moves in with Theresa and Tate, in their own penthouse.'
- source_sentence: 'query: where does the last name francisco come from'
sentences:
- 'document: Francisco Francisco is the Spanish and Portuguese form of the masculine
given name Franciscus (corresponding to English Francis).'
- 'document: Book of Esther The Book of Esther, also known in Hebrew as "the Scroll"
(Megillah), is a book in the third section (Ketuvim, "Writings") of the Jewish
Tanakh (the Hebrew Bible) and in the Christian Old Testament. It is one of the
five Scrolls (Megillot) in the Hebrew Bible. It relates the story of a Hebrew
woman in Persia, born as Hadassah but known as Esther, who becomes queen of Persia
and thwarts a genocide of her people. The story forms the core of the Jewish festival
of Purim, during which it is read aloud twice: once in the evening and again the
following morning. The books of Esther and Song of Songs are the only books in
the Hebrew Bible that do not explicitly mention God.[2]'
- 'document: Times Square Times Square is a major commercial intersection, tourist
destination, entertainment center and neighborhood in the Midtown Manhattan section
of New York City at the junction of Broadway and Seventh Avenue. It stretches
from West 42nd to West 47th Streets.[1] Brightly adorned with billboards and advertisements,
Times Square is sometimes referred to as "The Crossroads of the World",[2] "The
Center of the Universe",[3] "the heart of The Great White Way",[4][5][6] and the
"heart of the world".[7] One of the world''s busiest pedestrian areas,[8] it is
also the hub of the Broadway Theater District[9] and a major center of the world''s
entertainment industry.[10] Times Square is one of the world''s most visited tourist
attractions, drawing an estimated 50 million visitors annually.[11] Approximately
330,000 people pass through Times Square daily,[12] many of them tourists,[13]
while over 460,000 pedestrians walk through Times Square on its busiest days.[7]'
co2_eq_emissions:
emissions: 103.95223177174714
energy_consumed: 0.2674342601060636
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.776
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: BERT base trained on Natural Questions pairs
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoClimateFEVER
type: NanoClimateFEVER
metrics:
- type: cosine_accuracy@1
value: 0.24
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.36
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.42
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.24
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.14666666666666667
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.10800000000000001
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.076
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.08833333333333332
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.1733333333333333
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.205
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.31066666666666665
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.23668411144897733
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.32507936507936497
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.18064440317511302
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoDBPedia
type: NanoDBPedia
metrics:
- type: cosine_accuracy@1
value: 0.58
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.74
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.84
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.58
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.46
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.43200000000000005
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.4
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.060187987174836206
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.10977424825151455
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.16707520990044147
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.24597415193723152
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4733134773883028
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6808571428571429
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.33434372400711937
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoFEVER
type: NanoFEVER
metrics:
- type: cosine_accuracy@1
value: 0.52
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.66
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.68
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.76
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.52
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.22
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14400000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.63
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.67
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.75
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6250288470609421
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5971904761904763
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5841699073691555
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoFiQA2018
type: NanoFiQA2018
metrics:
- type: cosine_accuracy@1
value: 0.14
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.3
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.4
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.44
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.14
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.11333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.10800000000000001
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.064
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.07933333333333334
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.16352380952380952
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.22846031746031745
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.27512698412698416
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.2070483011862227
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.23955555555555555
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.17184447175268844
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoHotpotQA
type: NanoHotpotQA
metrics:
- type: cosine_accuracy@1
value: 0.54
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.62
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.66
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.78
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.54
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.25333333333333335
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17199999999999996
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10799999999999997
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.27
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.38
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.43
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.54
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4758825161205549
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5948571428571429
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.403633154924419
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: cosine_accuracy@1
value: 0.2
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.4
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.48
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.62
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.2
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.13333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.096
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06200000000000001
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.2
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.48
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.62
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.3929333444965005
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3225793650793651
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.3345903944684922
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoNFCorpus
type: NanoNFCorpus
metrics:
- type: cosine_accuracy@1
value: 0.32
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.46
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.52
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.58
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.32
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.28
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.22800000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.16999999999999996
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.023393732410294653
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.04028202721825723
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.05292320850853196
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.06512766188420571
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.21330057691798984
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.40985714285714286
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.07333772175450959
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: cosine_accuracy@1
value: 0.42
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.58
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.64
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.42
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.19333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.128
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.4
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.56
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.62
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.67
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5390417243338262
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5118333333333334
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5014983526115104
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoQuoraRetrieval
type: NanoQuoraRetrieval
metrics:
- type: cosine_accuracy@1
value: 0.68
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.94
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.94
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.68
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3533333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.23599999999999993
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.12599999999999997
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6106666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8486666666666668
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9093333333333333
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9266666666666667
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8205618979026005
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7846666666666667
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.786847374847375
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoSCIDOCS
type: NanoSCIDOCS
metrics:
- type: cosine_accuracy@1
value: 0.26
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.46
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.46
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.58
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.26
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17600000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.132
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.05566666666666667
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.14466666666666667
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.1806666666666667
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.27066666666666667
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.2517704665914677
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.36450000000000005
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.20084375671559634
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoArguAna
type: NanoArguAna
metrics:
- type: cosine_accuracy@1
value: 0.14
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.58
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.76
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.14
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.16666666666666663
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.11600000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07600000000000001
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.14
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.58
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.76
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4417985537040473
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3413253968253968
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.3506916603232609
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoSciFact
type: NanoSciFact
metrics:
- type: cosine_accuracy@1
value: 0.38
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.54
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.56
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.38
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.19333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.128
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.345
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.51
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.545
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.59
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.48570181290684433
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.46035714285714285
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4539281050639794
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoTouche2020
type: NanoTouche2020
metrics:
- type: cosine_accuracy@1
value: 0.5510204081632653
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7959183673469388
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9183673469387755
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9795918367346939
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5510204081632653
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.45578231292516996
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.4326530612244897
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.37755102040816335
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.040936400203138934
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.10543098224373823
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.15289328979061165
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.2540307547275961
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4244756661687274
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.689310009718173
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.3161855102539037
name: Cosine Map@100
- task:
type: nano-beir
name: Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: cosine_accuracy@1
value: 0.38238618524332807
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5627629513343799
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6229513343799058
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7107378335949763
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.38238618524332807
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.24634222919937204
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19266562009419153
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.13935007849293563
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.21642447075294383
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.3512059795310759
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.40164246351230015
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.48294304251353987
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.42981086894053877
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.48630528768283876
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.36096604132824023
name: Cosine Map@100
---
# BERT base trained on Natural Questions pairs
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
This model was trained using the script from the [Training with Prompts](https://sbert.net/examples/training/prompts/README.html) Sentence Transformers documentation.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) <!-- at revision 86b5e0934494bd15c9632b12f734a8a67f723594 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': False})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/bert-base-nq-prompts-exclude-pooling-prompts")
# Run inference
sentences = [
'query: where does the last name francisco come from',
'document: Francisco Francisco is the Spanish and Portuguese form of the masculine given name Franciscus (corresponding to English Francis).',
'document: Book of Esther The Book of Esther, also known in Hebrew as "the Scroll" (Megillah), is a book in the third section (Ketuvim, "Writings") of the Jewish Tanakh (the Hebrew Bible) and in the Christian Old Testament. It is one of the five Scrolls (Megillot) in the Hebrew Bible. It relates the story of a Hebrew woman in Persia, born as Hadassah but known as Esther, who becomes queen of Persia and thwarts a genocide of her people. The story forms the core of the Jewish festival of Purim, during which it is read aloud twice: once in the evening and again the following morning. The books of Esther and Song of Songs are the only books in the Hebrew Bible that do not explicitly mention God.[2]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `NanoClimateFEVER`, `NanoDBPedia`, `NanoFEVER`, `NanoFiQA2018`, `NanoHotpotQA`, `NanoMSMARCO`, `NanoNFCorpus`, `NanoNQ`, `NanoQuoraRetrieval`, `NanoSCIDOCS`, `NanoArguAna`, `NanoSciFact` and `NanoTouche2020`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
|:--------------------|:-----------------|:------------|:----------|:-------------|:-------------|:------------|:-------------|:----------|:-------------------|:------------|:------------|:------------|:---------------|
| cosine_accuracy@1 | 0.24 | 0.58 | 0.52 | 0.14 | 0.54 | 0.2 | 0.32 | 0.42 | 0.68 | 0.26 | 0.14 | 0.38 | 0.551 |
| cosine_accuracy@3 | 0.36 | 0.74 | 0.66 | 0.3 | 0.62 | 0.4 | 0.46 | 0.58 | 0.9 | 0.46 | 0.5 | 0.54 | 0.7959 |
| cosine_accuracy@5 | 0.42 | 0.84 | 0.68 | 0.4 | 0.66 | 0.48 | 0.52 | 0.64 | 0.94 | 0.46 | 0.58 | 0.56 | 0.9184 |
| cosine_accuracy@10 | 0.6 | 0.9 | 0.76 | 0.44 | 0.78 | 0.62 | 0.58 | 0.7 | 0.94 | 0.58 | 0.76 | 0.6 | 0.9796 |
| cosine_precision@1 | 0.24 | 0.58 | 0.52 | 0.14 | 0.54 | 0.2 | 0.32 | 0.42 | 0.68 | 0.26 | 0.14 | 0.38 | 0.551 |
| cosine_precision@3 | 0.1467 | 0.46 | 0.22 | 0.1133 | 0.2533 | 0.1333 | 0.28 | 0.1933 | 0.3533 | 0.2333 | 0.1667 | 0.1933 | 0.4558 |
| cosine_precision@5 | 0.108 | 0.432 | 0.144 | 0.108 | 0.172 | 0.096 | 0.228 | 0.128 | 0.236 | 0.176 | 0.116 | 0.128 | 0.4327 |
| cosine_precision@10 | 0.076 | 0.4 | 0.08 | 0.064 | 0.108 | 0.062 | 0.17 | 0.07 | 0.126 | 0.132 | 0.076 | 0.07 | 0.3776 |
| cosine_recall@1 | 0.0883 | 0.0602 | 0.5 | 0.0793 | 0.27 | 0.2 | 0.0234 | 0.4 | 0.6107 | 0.0557 | 0.14 | 0.345 | 0.0409 |
| cosine_recall@3 | 0.1733 | 0.1098 | 0.63 | 0.1635 | 0.38 | 0.4 | 0.0403 | 0.56 | 0.8487 | 0.1447 | 0.5 | 0.51 | 0.1054 |
| cosine_recall@5 | 0.205 | 0.1671 | 0.67 | 0.2285 | 0.43 | 0.48 | 0.0529 | 0.62 | 0.9093 | 0.1807 | 0.58 | 0.545 | 0.1529 |
| cosine_recall@10 | 0.3107 | 0.246 | 0.75 | 0.2751 | 0.54 | 0.62 | 0.0651 | 0.67 | 0.9267 | 0.2707 | 0.76 | 0.59 | 0.254 |
| **cosine_ndcg@10** | **0.2367** | **0.4733** | **0.625** | **0.207** | **0.4759** | **0.3929** | **0.2133** | **0.539** | **0.8206** | **0.2518** | **0.4418** | **0.4857** | **0.4245** |
| cosine_mrr@10 | 0.3251 | 0.6809 | 0.5972 | 0.2396 | 0.5949 | 0.3226 | 0.4099 | 0.5118 | 0.7847 | 0.3645 | 0.3413 | 0.4604 | 0.6893 |
| cosine_map@100 | 0.1806 | 0.3343 | 0.5842 | 0.1718 | 0.4036 | 0.3346 | 0.0733 | 0.5015 | 0.7868 | 0.2008 | 0.3507 | 0.4539 | 0.3162 |
#### Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>NanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.NanoBEIREvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3824 |
| cosine_accuracy@3 | 0.5628 |
| cosine_accuracy@5 | 0.623 |
| cosine_accuracy@10 | 0.7107 |
| cosine_precision@1 | 0.3824 |
| cosine_precision@3 | 0.2463 |
| cosine_precision@5 | 0.1927 |
| cosine_precision@10 | 0.1394 |
| cosine_recall@1 | 0.2164 |
| cosine_recall@3 | 0.3512 |
| cosine_recall@5 | 0.4016 |
| cosine_recall@10 | 0.4829 |
| **cosine_ndcg@10** | **0.4298** |
| cosine_mrr@10 | 0.4863 |
| cosine_map@100 | 0.361 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 100,231 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 13.74 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 139.2 tokens</li><li>max: 510 tokens</li></ul> |
* Samples:
| query | answer |
|:-------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>query: who is required to report according to the hmda</code> | <code>document: Home Mortgage Disclosure Act US financial institutions must report HMDA data to their regulator if they meet certain criteria, such as having assets above a specific threshold. The criteria is different for depository and non-depository institutions and are available on the FFIEC website.[4] In 2012, there were 7,400 institutions that reported a total of 18.7 million HMDA records.[5]</code> |
| <code>query: what is the definition of endoplasmic reticulum in biology</code> | <code>document: Endoplasmic reticulum The endoplasmic reticulum (ER) is a type of organelle in eukaryotic cells that forms an interconnected network of flattened, membrane-enclosed sacs or tube-like structures known as cisternae. The membranes of the ER are continuous with the outer nuclear membrane. The endoplasmic reticulum occurs in most types of eukaryotic cells, but is absent from red blood cells and spermatozoa. There are two types of endoplasmic reticulum: rough and smooth. The outer (cytosolic) face of the rough endoplasmic reticulum is studded with ribosomes that are the sites of protein synthesis. The rough endoplasmic reticulum is especially prominent in cells such as hepatocytes. The smooth endoplasmic reticulum lacks ribosomes and functions in lipid manufacture and metabolism, the production of steroid hormones, and detoxification.[1] The smooth ER is especially abundant in mammalian liver and gonad cells. The lacy membranes of the endoplasmic reticulum were first seen in 1945 u...</code> |
| <code>query: what does the ski mean in polish names</code> | <code>document: Polish name Since the High Middle Ages, Polish-sounding surnames ending with the masculine -ski suffix, including -cki and -dzki, and the corresponding feminine suffix -ska/-cka/-dzka were associated with the nobility (Polish szlachta), which alone, in the early years, had such suffix distinctions.[1] They are widely popular today.</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 100,231 evaluation samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 13.78 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 137.63 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| query | answer |
|:-------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>query: difference between russian blue and british blue cat</code> | <code>document: Russian Blue The coat is known as a "double coat", with the undercoat being soft, downy and equal in length to the guard hairs, which are an even blue with silver tips. However, the tail may have a few very dull, almost unnoticeable stripes. The coat is described as thick, plush and soft to the touch. The feeling is softer than the softest silk. The silver tips give the coat a shimmering appearance. Its eyes are almost always a dark and vivid green. Any white patches of fur or yellow eyes in adulthood are seen as flaws in show cats.[3] Russian Blues should not be confused with British Blues (which are not a distinct breed, but rather a British Shorthair with a blue coat as the British Shorthair breed itself comes in a wide variety of colors and patterns), nor the Chartreux or Korat which are two other naturally occurring breeds of blue cats, although they have similar traits.</code> |
| <code>query: who played the little girl on mrs doubtfire</code> | <code>document: Mara Wilson Mara Elizabeth Wilson[2] (born July 24, 1987) is an American writer and former child actress. She is known for playing Natalie Hillard in Mrs. Doubtfire (1993), Susan Walker in Miracle on 34th Street (1994), Matilda Wormwood in Matilda (1996) and Lily Stone in Thomas and the Magic Railroad (2000). Since retiring from film acting, Wilson has focused on writing.</code> |
| <code>query: what year did the movie the sound of music come out</code> | <code>document: The Sound of Music (film) The film was released on March 2, 1965 in the United States, initially as a limited roadshow theatrical release. Although critical response to the film was widely mixed, the film was a major commercial success, becoming the number one box office movie after four weeks, and the highest-grossing film of 1965. By November 1966, The Sound of Music had become the highest-grossing film of all-time—surpassing Gone with the Wind—and held that distinction for five years. The film was just as popular throughout the world, breaking previous box-office records in twenty-nine countries. Following an initial theatrical release that lasted four and a half years, and two successful re-releases, the film sold 283 million admissions worldwide and earned a total worldwide gross of $286,000,000.</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `prompts`: {'query': 'query: ', 'answer': 'document: '}
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: {'query': 'query: ', 'answer': 'document: '}
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoClimateFEVER_cosine_ndcg@10 | NanoDBPedia_cosine_ndcg@10 | NanoFEVER_cosine_ndcg@10 | NanoFiQA2018_cosine_ndcg@10 | NanoHotpotQA_cosine_ndcg@10 | NanoMSMARCO_cosine_ndcg@10 | NanoNFCorpus_cosine_ndcg@10 | NanoNQ_cosine_ndcg@10 | NanoQuoraRetrieval_cosine_ndcg@10 | NanoSCIDOCS_cosine_ndcg@10 | NanoArguAna_cosine_ndcg@10 | NanoSciFact_cosine_ndcg@10 | NanoTouche2020_cosine_ndcg@10 | NanoBEIR_mean_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:---------------:|:-------------------------------:|:--------------------------:|:------------------------:|:---------------------------:|:---------------------------:|:--------------------------:|:---------------------------:|:---------------------:|:---------------------------------:|:--------------------------:|:--------------------------:|:--------------------------:|:-----------------------------:|:----------------------------:|
| 0 | 0 | - | - | 0.1106 | 0.2356 | 0.1544 | 0.0809 | 0.2551 | 0.2289 | 0.0889 | 0.0875 | 0.7699 | 0.1312 | 0.2403 | 0.1457 | 0.1601 | 0.2068 |
| 0.0026 | 1 | 3.0398 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0129 | 5 | 3.0734 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0258 | 10 | 2.8416 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0387 | 15 | 2.3639 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0515 | 20 | 1.8224 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0644 | 25 | 1.4264 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0773 | 30 | 1.1915 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0902 | 35 | 1.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1031 | 40 | 0.8502 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1160 | 45 | 0.6719 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1289 | 50 | 0.517 | 0.4561 | 0.1696 | 0.4226 | 0.5939 | 0.1618 | 0.4108 | 0.3236 | 0.1649 | 0.4491 | 0.8389 | 0.2458 | 0.4394 | 0.4473 | 0.3660 | 0.3872 |
| 0.1418 | 55 | 0.4655 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1546 | 60 | 0.3677 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1675 | 65 | 0.3677 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1804 | 70 | 0.2745 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1933 | 75 | 0.3488 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2062 | 80 | 0.3043 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2191 | 85 | 0.2866 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2320 | 90 | 0.2697 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2448 | 95 | 0.2543 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2577 | 100 | 0.2702 | 0.2429 | 0.2066 | 0.4474 | 0.6078 | 0.1928 | 0.4406 | 0.3904 | 0.2059 | 0.5030 | 0.8272 | 0.2647 | 0.4627 | 0.4723 | 0.3897 | 0.4162 |
| 0.2706 | 105 | 0.2493 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2835 | 110 | 0.2636 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2964 | 115 | 0.2574 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3093 | 120 | 0.2447 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3222 | 125 | 0.2639 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3351 | 130 | 0.2073 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3479 | 135 | 0.2185 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3608 | 140 | 0.2413 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3737 | 145 | 0.2167 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3866 | 150 | 0.1871 | 0.2020 | 0.2084 | 0.4588 | 0.6261 | 0.1931 | 0.4470 | 0.3937 | 0.2068 | 0.5154 | 0.8236 | 0.2570 | 0.4578 | 0.4640 | 0.3999 | 0.4194 |
| 0.3995 | 155 | 0.2143 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4124 | 160 | 0.2074 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4253 | 165 | 0.1852 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4381 | 170 | 0.1932 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4510 | 175 | 0.1853 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4639 | 180 | 0.1612 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4768 | 185 | 0.1665 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4897 | 190 | 0.2422 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5026 | 195 | 0.1948 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5155 | 200 | 0.2277 | 0.1861 | 0.2178 | 0.4567 | 0.6168 | 0.2158 | 0.4684 | 0.3760 | 0.2088 | 0.5388 | 0.8247 | 0.2632 | 0.4582 | 0.4680 | 0.4249 | 0.4260 |
| 0.5284 | 205 | 0.2216 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5412 | 210 | 0.189 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5541 | 215 | 0.2094 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5670 | 220 | 0.2074 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5799 | 225 | 0.2145 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5928 | 230 | 0.2033 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6057 | 235 | 0.2355 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6186 | 240 | 0.2044 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6314 | 245 | 0.2201 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6443 | 250 | 0.1841 | 0.1760 | 0.2397 | 0.4601 | 0.6282 | 0.2002 | 0.4693 | 0.3899 | 0.2124 | 0.5446 | 0.8262 | 0.2568 | 0.4581 | 0.4835 | 0.4355 | 0.4311 |
| 0.6572 | 255 | 0.2144 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6701 | 260 | 0.2123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6830 | 265 | 0.1824 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6959 | 270 | 0.1673 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7088 | 275 | 0.1663 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7216 | 280 | 0.1988 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7345 | 285 | 0.1727 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7474 | 290 | 0.1851 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7603 | 295 | 0.2147 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7732 | 300 | 0.1697 | 0.1688 | 0.2342 | 0.4741 | 0.6356 | 0.2060 | 0.4752 | 0.3947 | 0.2153 | 0.5443 | 0.8192 | 0.2547 | 0.4339 | 0.4818 | 0.4310 | 0.4308 |
| 0.7861 | 305 | 0.187 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7990 | 310 | 0.1515 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8119 | 315 | 0.1703 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8247 | 320 | 0.1827 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8376 | 325 | 0.1881 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8505 | 330 | 0.1792 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8634 | 335 | 0.1954 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8763 | 340 | 0.1772 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8892 | 345 | 0.1694 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9021 | 350 | 0.1727 | 0.1622 | 0.2394 | 0.4702 | 0.6247 | 0.2123 | 0.4772 | 0.3884 | 0.2152 | 0.5356 | 0.8199 | 0.2527 | 0.4351 | 0.4853 | 0.4245 | 0.4293 |
| 0.9149 | 355 | 0.1794 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9278 | 360 | 0.1816 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9407 | 365 | 0.1708 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9536 | 370 | 0.202 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9665 | 375 | 0.1854 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9794 | 380 | 0.1958 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9923 | 385 | 0.1561 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.0 | 388 | - | - | 0.2367 | 0.4733 | 0.6250 | 0.2070 | 0.4759 | 0.3929 | 0.2133 | 0.5390 | 0.8206 | 0.2518 | 0.4418 | 0.4857 | 0.4245 | 0.4298 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.267 kWh
- **Carbon Emitted**: 0.104 kg of CO2
- **Hours Used**: 0.776 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.3.0.dev0
- Transformers: 4.46.2
- PyTorch: 2.5.0+cu121
- Accelerate: 1.0.0
- Datasets: 2.20.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# BERT base trained on Natural Questions pairs
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
This model was trained using the script from the [Training with Prompts](https://sbert.net/examples/training/prompts/README.html) Sentence Transformers documentation.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) <!-- at revision 86b5e0934494bd15c9632b12f734a8a67f723594 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': False})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/bert-base-nq-prompts-exclude-pooling-prompts")
# Run inference
sentences = [
'query: where does the last name francisco come from',
'document: Francisco Francisco is the Spanish and Portuguese form of the masculine given name Franciscus (corresponding to English Francis).',
'document: Book of Esther The Book of Esther, also known in Hebrew as "the Scroll" (Megillah), is a book in the third section (Ketuvim, "Writings") of the Jewish Tanakh (the Hebrew Bible) and in the Christian Old Testament. It is one of the five Scrolls (Megillot) in the Hebrew Bible. It relates the story of a Hebrew woman in Persia, born as Hadassah but known as Esther, who becomes queen of Persia and thwarts a genocide of her people. The story forms the core of the Jewish festival of Purim, during which it is read aloud twice: once in the evening and again the following morning. The books of Esther and Song of Songs are the only books in the Hebrew Bible that do not explicitly mention God.[2]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `NanoClimateFEVER`, `NanoDBPedia`, `NanoFEVER`, `NanoFiQA2018`, `NanoHotpotQA`, `NanoMSMARCO`, `NanoNFCorpus`, `NanoNQ`, `NanoQuoraRetrieval`, `NanoSCIDOCS`, `NanoArguAna`, `NanoSciFact` and `NanoTouche2020`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
|:--------------------|:-----------------|:------------|:----------|:-------------|:-------------|:------------|:-------------|:----------|:-------------------|:------------|:------------|:------------|:---------------|
| cosine_accuracy@1 | 0.24 | 0.58 | 0.52 | 0.14 | 0.54 | 0.2 | 0.32 | 0.42 | 0.68 | 0.26 | 0.14 | 0.38 | 0.551 |
| cosine_accuracy@3 | 0.36 | 0.74 | 0.66 | 0.3 | 0.62 | 0.4 | 0.46 | 0.58 | 0.9 | 0.46 | 0.5 | 0.54 | 0.7959 |
| cosine_accuracy@5 | 0.42 | 0.84 | 0.68 | 0.4 | 0.66 | 0.48 | 0.52 | 0.64 | 0.94 | 0.46 | 0.58 | 0.56 | 0.9184 |
| cosine_accuracy@10 | 0.6 | 0.9 | 0.76 | 0.44 | 0.78 | 0.62 | 0.58 | 0.7 | 0.94 | 0.58 | 0.76 | 0.6 | 0.9796 |
| cosine_precision@1 | 0.24 | 0.58 | 0.52 | 0.14 | 0.54 | 0.2 | 0.32 | 0.42 | 0.68 | 0.26 | 0.14 | 0.38 | 0.551 |
| cosine_precision@3 | 0.1467 | 0.46 | 0.22 | 0.1133 | 0.2533 | 0.1333 | 0.28 | 0.1933 | 0.3533 | 0.2333 | 0.1667 | 0.1933 | 0.4558 |
| cosine_precision@5 | 0.108 | 0.432 | 0.144 | 0.108 | 0.172 | 0.096 | 0.228 | 0.128 | 0.236 | 0.176 | 0.116 | 0.128 | 0.4327 |
| cosine_precision@10 | 0.076 | 0.4 | 0.08 | 0.064 | 0.108 | 0.062 | 0.17 | 0.07 | 0.126 | 0.132 | 0.076 | 0.07 | 0.3776 |
| cosine_recall@1 | 0.0883 | 0.0602 | 0.5 | 0.0793 | 0.27 | 0.2 | 0.0234 | 0.4 | 0.6107 | 0.0557 | 0.14 | 0.345 | 0.0409 |
| cosine_recall@3 | 0.1733 | 0.1098 | 0.63 | 0.1635 | 0.38 | 0.4 | 0.0403 | 0.56 | 0.8487 | 0.1447 | 0.5 | 0.51 | 0.1054 |
| cosine_recall@5 | 0.205 | 0.1671 | 0.67 | 0.2285 | 0.43 | 0.48 | 0.0529 | 0.62 | 0.9093 | 0.1807 | 0.58 | 0.545 | 0.1529 |
| cosine_recall@10 | 0.3107 | 0.246 | 0.75 | 0.2751 | 0.54 | 0.62 | 0.0651 | 0.67 | 0.9267 | 0.2707 | 0.76 | 0.59 | 0.254 |
| **cosine_ndcg@10** | **0.2367** | **0.4733** | **0.625** | **0.207** | **0.4759** | **0.3929** | **0.2133** | **0.539** | **0.8206** | **0.2518** | **0.4418** | **0.4857** | **0.4245** |
| cosine_mrr@10 | 0.3251 | 0.6809 | 0.5972 | 0.2396 | 0.5949 | 0.3226 | 0.4099 | 0.5118 | 0.7847 | 0.3645 | 0.3413 | 0.4604 | 0.6893 |
| cosine_map@100 | 0.1806 | 0.3343 | 0.5842 | 0.1718 | 0.4036 | 0.3346 | 0.0733 | 0.5015 | 0.7868 | 0.2008 | 0.3507 | 0.4539 | 0.3162 |
#### Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>NanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.NanoBEIREvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3824 |
| cosine_accuracy@3 | 0.5628 |
| cosine_accuracy@5 | 0.623 |
| cosine_accuracy@10 | 0.7107 |
| cosine_precision@1 | 0.3824 |
| cosine_precision@3 | 0.2463 |
| cosine_precision@5 | 0.1927 |
| cosine_precision@10 | 0.1394 |
| cosine_recall@1 | 0.2164 |
| cosine_recall@3 | 0.3512 |
| cosine_recall@5 | 0.4016 |
| cosine_recall@10 | 0.4829 |
| **cosine_ndcg@10** | **0.4298** |
| cosine_mrr@10 | 0.4863 |
| cosine_map@100 | 0.361 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 100,231 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 13.74 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 139.2 tokens</li><li>max: 510 tokens</li></ul> |
* Samples:
| query | answer |
|:-------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>query: who is required to report according to the hmda</code> | <code>document: Home Mortgage Disclosure Act US financial institutions must report HMDA data to their regulator if they meet certain criteria, such as having assets above a specific threshold. The criteria is different for depository and non-depository institutions and are available on the FFIEC website.[4] In 2012, there were 7,400 institutions that reported a total of 18.7 million HMDA records.[5]</code> |
| <code>query: what is the definition of endoplasmic reticulum in biology</code> | <code>document: Endoplasmic reticulum The endoplasmic reticulum (ER) is a type of organelle in eukaryotic cells that forms an interconnected network of flattened, membrane-enclosed sacs or tube-like structures known as cisternae. The membranes of the ER are continuous with the outer nuclear membrane. The endoplasmic reticulum occurs in most types of eukaryotic cells, but is absent from red blood cells and spermatozoa. There are two types of endoplasmic reticulum: rough and smooth. The outer (cytosolic) face of the rough endoplasmic reticulum is studded with ribosomes that are the sites of protein synthesis. The rough endoplasmic reticulum is especially prominent in cells such as hepatocytes. The smooth endoplasmic reticulum lacks ribosomes and functions in lipid manufacture and metabolism, the production of steroid hormones, and detoxification.[1] The smooth ER is especially abundant in mammalian liver and gonad cells. The lacy membranes of the endoplasmic reticulum were first seen in 1945 u...</code> |
| <code>query: what does the ski mean in polish names</code> | <code>document: Polish name Since the High Middle Ages, Polish-sounding surnames ending with the masculine -ski suffix, including -cki and -dzki, and the corresponding feminine suffix -ska/-cka/-dzka were associated with the nobility (Polish szlachta), which alone, in the early years, had such suffix distinctions.[1] They are widely popular today.</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 100,231 evaluation samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 13.78 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 137.63 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| query | answer |
|:-------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>query: difference between russian blue and british blue cat</code> | <code>document: Russian Blue The coat is known as a "double coat", with the undercoat being soft, downy and equal in length to the guard hairs, which are an even blue with silver tips. However, the tail may have a few very dull, almost unnoticeable stripes. The coat is described as thick, plush and soft to the touch. The feeling is softer than the softest silk. The silver tips give the coat a shimmering appearance. Its eyes are almost always a dark and vivid green. Any white patches of fur or yellow eyes in adulthood are seen as flaws in show cats.[3] Russian Blues should not be confused with British Blues (which are not a distinct breed, but rather a British Shorthair with a blue coat as the British Shorthair breed itself comes in a wide variety of colors and patterns), nor the Chartreux or Korat which are two other naturally occurring breeds of blue cats, although they have similar traits.</code> |
| <code>query: who played the little girl on mrs doubtfire</code> | <code>document: Mara Wilson Mara Elizabeth Wilson[2] (born July 24, 1987) is an American writer and former child actress. She is known for playing Natalie Hillard in Mrs. Doubtfire (1993), Susan Walker in Miracle on 34th Street (1994), Matilda Wormwood in Matilda (1996) and Lily Stone in Thomas and the Magic Railroad (2000). Since retiring from film acting, Wilson has focused on writing.</code> |
| <code>query: what year did the movie the sound of music come out</code> | <code>document: The Sound of Music (film) The film was released on March 2, 1965 in the United States, initially as a limited roadshow theatrical release. Although critical response to the film was widely mixed, the film was a major commercial success, becoming the number one box office movie after four weeks, and the highest-grossing film of 1965. By November 1966, The Sound of Music had become the highest-grossing film of all-time—surpassing Gone with the Wind—and held that distinction for five years. The film was just as popular throughout the world, breaking previous box-office records in twenty-nine countries. Following an initial theatrical release that lasted four and a half years, and two successful re-releases, the film sold 283 million admissions worldwide and earned a total worldwide gross of $286,000,000.</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `prompts`: {'query': 'query: ', 'answer': 'document: '}
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: {'query': 'query: ', 'answer': 'document: '}
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoClimateFEVER_cosine_ndcg@10 | NanoDBPedia_cosine_ndcg@10 | NanoFEVER_cosine_ndcg@10 | NanoFiQA2018_cosine_ndcg@10 | NanoHotpotQA_cosine_ndcg@10 | NanoMSMARCO_cosine_ndcg@10 | NanoNFCorpus_cosine_ndcg@10 | NanoNQ_cosine_ndcg@10 | NanoQuoraRetrieval_cosine_ndcg@10 | NanoSCIDOCS_cosine_ndcg@10 | NanoArguAna_cosine_ndcg@10 | NanoSciFact_cosine_ndcg@10 | NanoTouche2020_cosine_ndcg@10 | NanoBEIR_mean_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:---------------:|:-------------------------------:|:--------------------------:|:------------------------:|:---------------------------:|:---------------------------:|:--------------------------:|:---------------------------:|:---------------------:|:---------------------------------:|:--------------------------:|:--------------------------:|:--------------------------:|:-----------------------------:|:----------------------------:|
| 0 | 0 | - | - | 0.1106 | 0.2356 | 0.1544 | 0.0809 | 0.2551 | 0.2289 | 0.0889 | 0.0875 | 0.7699 | 0.1312 | 0.2403 | 0.1457 | 0.1601 | 0.2068 |
| 0.0026 | 1 | 3.0398 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0129 | 5 | 3.0734 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0258 | 10 | 2.8416 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0387 | 15 | 2.3639 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0515 | 20 | 1.8224 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0644 | 25 | 1.4264 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0773 | 30 | 1.1915 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0902 | 35 | 1.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1031 | 40 | 0.8502 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1160 | 45 | 0.6719 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1289 | 50 | 0.517 | 0.4561 | 0.1696 | 0.4226 | 0.5939 | 0.1618 | 0.4108 | 0.3236 | 0.1649 | 0.4491 | 0.8389 | 0.2458 | 0.4394 | 0.4473 | 0.3660 | 0.3872 |
| 0.1418 | 55 | 0.4655 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1546 | 60 | 0.3677 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1675 | 65 | 0.3677 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1804 | 70 | 0.2745 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1933 | 75 | 0.3488 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2062 | 80 | 0.3043 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2191 | 85 | 0.2866 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2320 | 90 | 0.2697 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2448 | 95 | 0.2543 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2577 | 100 | 0.2702 | 0.2429 | 0.2066 | 0.4474 | 0.6078 | 0.1928 | 0.4406 | 0.3904 | 0.2059 | 0.5030 | 0.8272 | 0.2647 | 0.4627 | 0.4723 | 0.3897 | 0.4162 |
| 0.2706 | 105 | 0.2493 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2835 | 110 | 0.2636 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2964 | 115 | 0.2574 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3093 | 120 | 0.2447 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3222 | 125 | 0.2639 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3351 | 130 | 0.2073 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3479 | 135 | 0.2185 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3608 | 140 | 0.2413 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3737 | 145 | 0.2167 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3866 | 150 | 0.1871 | 0.2020 | 0.2084 | 0.4588 | 0.6261 | 0.1931 | 0.4470 | 0.3937 | 0.2068 | 0.5154 | 0.8236 | 0.2570 | 0.4578 | 0.4640 | 0.3999 | 0.4194 |
| 0.3995 | 155 | 0.2143 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4124 | 160 | 0.2074 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4253 | 165 | 0.1852 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4381 | 170 | 0.1932 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4510 | 175 | 0.1853 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4639 | 180 | 0.1612 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4768 | 185 | 0.1665 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4897 | 190 | 0.2422 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5026 | 195 | 0.1948 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5155 | 200 | 0.2277 | 0.1861 | 0.2178 | 0.4567 | 0.6168 | 0.2158 | 0.4684 | 0.3760 | 0.2088 | 0.5388 | 0.8247 | 0.2632 | 0.4582 | 0.4680 | 0.4249 | 0.4260 |
| 0.5284 | 205 | 0.2216 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5412 | 210 | 0.189 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5541 | 215 | 0.2094 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5670 | 220 | 0.2074 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5799 | 225 | 0.2145 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5928 | 230 | 0.2033 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6057 | 235 | 0.2355 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6186 | 240 | 0.2044 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6314 | 245 | 0.2201 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6443 | 250 | 0.1841 | 0.1760 | 0.2397 | 0.4601 | 0.6282 | 0.2002 | 0.4693 | 0.3899 | 0.2124 | 0.5446 | 0.8262 | 0.2568 | 0.4581 | 0.4835 | 0.4355 | 0.4311 |
| 0.6572 | 255 | 0.2144 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6701 | 260 | 0.2123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6830 | 265 | 0.1824 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6959 | 270 | 0.1673 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7088 | 275 | 0.1663 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7216 | 280 | 0.1988 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7345 | 285 | 0.1727 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7474 | 290 | 0.1851 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7603 | 295 | 0.2147 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7732 | 300 | 0.1697 | 0.1688 | 0.2342 | 0.4741 | 0.6356 | 0.2060 | 0.4752 | 0.3947 | 0.2153 | 0.5443 | 0.8192 | 0.2547 | 0.4339 | 0.4818 | 0.4310 | 0.4308 |
| 0.7861 | 305 | 0.187 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7990 | 310 | 0.1515 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8119 | 315 | 0.1703 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8247 | 320 | 0.1827 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8376 | 325 | 0.1881 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8505 | 330 | 0.1792 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8634 | 335 | 0.1954 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8763 | 340 | 0.1772 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8892 | 345 | 0.1694 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9021 | 350 | 0.1727 | 0.1622 | 0.2394 | 0.4702 | 0.6247 | 0.2123 | 0.4772 | 0.3884 | 0.2152 | 0.5356 | 0.8199 | 0.2527 | 0.4351 | 0.4853 | 0.4245 | 0.4293 |
| 0.9149 | 355 | 0.1794 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9278 | 360 | 0.1816 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9407 | 365 | 0.1708 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9536 | 370 | 0.202 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9665 | 375 | 0.1854 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9794 | 380 | 0.1958 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9923 | 385 | 0.1561 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.0 | 388 | - | - | 0.2367 | 0.4733 | 0.6250 | 0.2070 | 0.4759 | 0.3929 | 0.2133 | 0.5390 | 0.8206 | 0.2518 | 0.4418 | 0.4857 | 0.4245 | 0.4298 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.267 kWh
- **Carbon Emitted**: 0.104 kg of CO2
- **Hours Used**: 0.776 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.3.0.dev0
- Transformers: 4.46.2
- PyTorch: 2.5.0+cu121
- Accelerate: 1.0.0
- Datasets: 2.20.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "google-bert/bert-base-uncased", "datasets": ["sentence-transformers/natural-questions"], "language": ["en"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:100231", "loss:CachedMultipleNegativesRankingLoss"], "widget": [{"source_sentence": "query: who ordered the charge of the light brigade", "sentences": ["document: Charge of the Light Brigade The Charge of the Light Brigade was a charge of British light cavalry led by Lord Cardigan against Russian forces during the Battle of Balaclava on 25 October 1854 in the Crimean War. Lord Raglan, overall commander of the British forces, had intended to send the Light Brigade to prevent the Russians from removing captured guns from overrun Turkish positions, a task well-suited to light cavalry.", "document: UNICEF The United Nations International Children's Emergency Fund was created by the United Nations General Assembly on 11 December 1946, to provide emergency food and healthcare to children in countries that had been devastated by World War II. The Polish physician Ludwik Rajchman is widely regarded as the founder of UNICEF and served as its first chairman from 1946. On Rajchman's suggestion, the American Maurice Pate was appointed its first executive director, serving from 1947 until his death in 1965.[5][6] In 1950, UNICEF's mandate was extended to address the long-term needs of children and women in developing countries everywhere. In 1953 it became a permanent part of the United Nations System, and the words \"international\" and \"emergency\" were dropped from the organization's name, making it simply the United Nations Children's Fund, retaining the original acronym, \"UNICEF\".[3]", "document: Marcus Jordan Marcus James Jordan (born December 24, 1990) is an American former college basketball player who played for the UCF Knights men's basketball team of Conference USA.[1] He is the son of retired Hall of Fame basketball player Michael Jordan."]}, {"source_sentence": "query: what part of the cow is the rib roast", "sentences": ["document: Standing rib roast A standing rib roast, also known as prime rib, is a cut of beef from the primal rib, one of the nine primal cuts of beef. While the entire rib section comprises ribs six through 12, a standing rib roast may contain anywhere from two to seven ribs.", "document: Blaine Anderson Kurt begins to mend their relationship in \"Thanksgiving\", just before New Directions loses at Sectionals to the Warblers, and they spend Christmas together in New York City.[29][30] Though he and Kurt continue to be on good terms, Blaine finds himself developing a crush on his best friend, Sam, which he knows will come to nothing as he knows Sam is not gay; the two of them team up to find evidence that the Warblers cheated at Sectionals, which means New Directions will be competing at Regionals. He ends up going to the Sadie Hawkins dance with Tina Cohen-Chang (Jenna Ushkowitz), who has developed a crush on him, but as friends only.[31] When Kurt comes to Lima for the wedding of glee club director Will (Matthew Morrison) and Emma (Jayma Mays)—which Emma flees—he and Blaine make out beforehand, and sleep together afterward, though they do not resume a permanent relationship.[32]", "document: Soviet Union The Soviet Union (Russian: Сове́тский Сою́з, tr. Sovétsky Soyúz, IPA: [sɐˈvʲɛt͡skʲɪj sɐˈjus] ( listen)), officially the Union of Soviet Socialist Republics (Russian: Сою́з Сове́тских Социалисти́ческих Респу́блик, tr. Soyúz Sovétskikh Sotsialistícheskikh Respúblik, IPA: [sɐˈjus sɐˈvʲɛtskʲɪx sətsɨəlʲɪsˈtʲitɕɪskʲɪx rʲɪˈspublʲɪk] ( listen)), abbreviated as the USSR (Russian: СССР, tr. SSSR), was a socialist state in Eurasia that existed from 1922 to 1991. Nominally a union of multiple national Soviet republics,[a] its government and economy were highly centralized. The country was a one-party state, governed by the Communist Party with Moscow as its capital in its largest republic, the Russian Soviet Federative Socialist Republic. The Russian nation had constitutionally equal status among the many nations of the union but exerted de facto dominance in various respects.[7] Other major urban centres were Leningrad, Kiev, Minsk, Alma-Ata and Novosibirsk. The Soviet Union was one of the five recognized nuclear weapons states and possessed the largest stockpile of weapons of mass destruction.[8] It was a founding permanent member of the United Nations Security Council, as well as a member of the Organization for Security and Co-operation in Europe (OSCE) and the leading member of the Council for Mutual Economic Assistance (CMEA) and the Warsaw Pact."]}, {"source_sentence": "query: what is the current big bang theory season", "sentences": ["document: Byzantine army From the seventh to the 12th centuries, the Byzantine army was among the most powerful and effective military forces in the world – neither Middle Ages Europe nor (following its early successes) the fracturing Caliphate could match the strategies and the efficiency of the Byzantine army. Restricted to a largely defensive role in the 7th to mid-9th centuries, the Byzantines developed the theme-system to counter the more powerful Caliphate. From the mid-9th century, however, they gradually went on the offensive, culminating in the great conquests of the 10th century under a series of soldier-emperors such as Nikephoros II Phokas, John Tzimiskes and Basil II. The army they led was less reliant on the militia of the themes; it was by now a largely professional force, with a strong and well-drilled infantry at its core and augmented by a revived heavy cavalry arm. With one of the most powerful economies in the world at the time, the Empire had the resources to put to the field a powerful host when needed, in order to reclaim its long-lost territories.", "document: The Big Bang Theory The Big Bang Theory is an American television sitcom created by Chuck Lorre and Bill Prady, both of whom serve as executive producers on the series, along with Steven Molaro. All three also serve as head writers. The show premiered on CBS on September 24, 2007.[3] The series' tenth season premiered on September 19, 2016.[4] In March 2017, the series was renewed for two additional seasons, bringing its total to twelve, and running through the 2018–19 television season. The eleventh season is set to premiere on September 25, 2017.[5]", "document: 2016 NCAA Division I Softball Tournament The 2016 NCAA Division I Softball Tournament was held from May 20 through June 8, 2016 as the final part of the 2016 NCAA Division I softball season. The 64 NCAA Division I college softball teams were to be selected out of an eligible 293 teams on May 15, 2016. Thirty-two teams were awarded an automatic bid as champions of their conference, and thirty-two teams were selected at-large by the NCAA Division I softball selection committee. The tournament culminated with eight teams playing in the 2016 Women's College World Series at ASA Hall of Fame Stadium in Oklahoma City in which the Oklahoma Sooners were crowned the champions."]}, {"source_sentence": "query: what happened to tates mom on days of our lives", "sentences": ["document: Paige O'Hara Donna Paige Helmintoller, better known as Paige O'Hara (born May 10, 1956),[1] is an American actress, voice actress, singer and painter. O'Hara began her career as a Broadway actress in 1983 when she portrayed Ellie May Chipley in the musical Showboat. In 1991, she made her motion picture debut in Disney's Beauty and the Beast, in which she voiced the film's heroine, Belle. Following the critical and commercial success of Beauty and the Beast, O'Hara reprised her role as Belle in the film's two direct-to-video follow-ups, Beauty and the Beast: The Enchanted Christmas and Belle's Magical World.", "document: M. Shadows Matthew Charles Sanders (born July 31, 1981), better known as M. Shadows, is an American singer, songwriter, and musician. He is best known as the lead vocalist, songwriter, and a founding member of the American heavy metal band Avenged Sevenfold. In 2017, he was voted 3rd in the list of Top 25 Greatest Modern Frontmen by Ultimate Guitar.[1]", "document: Theresa Donovan In July 2013, Jeannie returns to Salem, this time going by her middle name, Theresa. Initially, she strikes up a connection with resident bad boy JJ Deveraux (Casey Moss) while trying to secure some pot.[28] During a confrontation with JJ and his mother Jennifer Horton (Melissa Reeves) in her office, her aunt Kayla confirms that Theresa is in fact Jeannie and that Jen promised to hire her as her assistant, a promise she reluctantly agrees to. Kayla reminds Theresa it is her last chance at a fresh start.[29] Theresa also strikes up a bad first impression with Jennifer's daughter Abigail Deveraux (Kate Mansi) when Abigail smells pot on Theresa in her mother's office.[30] To continue to battle against Jennifer, she teams up with Anne Milbauer (Meredith Scott Lynn) in hopes of exacting her perfect revenge. In a ploy, Theresa reveals her intentions to hopefully woo Dr. Daniel Jonas (Shawn Christian). After sleeping with JJ, Theresa overdoses on marijuana and GHB. Upon hearing of their daughter's overdose and continuing problems, Shane and Kimberly return to town in the hopes of handling their daughter's problem, together. After believing that Theresa has a handle on her addictions, Shane and Kimberly leave town together. Theresa then teams up with hospital co-worker Anne Milbauer (Meredith Scott Lynn) to conspire against Jennifer, using Daniel as a way to hurt their relationship. In early 2014, following a Narcotics Anonymous (NA) meeting, she begins a sexual and drugged-fused relationship with Brady Black (Eric Martsolf). In 2015, after it is found that Kristen DiMera (Eileen Davidson) stole Theresa's embryo and carried it to term, Brady and Melanie Jonas return her son, Christopher, to her and Brady, and the pair rename him Tate. When Theresa moves into the Kiriakis mansion, tensions arise between her and Victor. She eventually expresses her interest in purchasing Basic Black and running it as her own fashion company, with financial backing from Maggie Horton (Suzanne Rogers). In the hopes of finding the right partner, she teams up with Kate Roberts (Lauren Koslow) and Nicole Walker (Arianne Zucker) to achieve the goal of purchasing Basic Black, with Kate and Nicole's business background and her own interest in fashion design. As she and Brady share several instances of rekindling their romance, she is kicked out of the mansion by Victor; as a result, Brady quits Titan and moves in with Theresa and Tate, in their own penthouse."]}, {"source_sentence": "query: where does the last name francisco come from", "sentences": ["document: Francisco Francisco is the Spanish and Portuguese form of the masculine given name Franciscus (corresponding to English Francis).", "document: Book of Esther The Book of Esther, also known in Hebrew as \"the Scroll\" (Megillah), is a book in the third section (Ketuvim, \"Writings\") of the Jewish Tanakh (the Hebrew Bible) and in the Christian Old Testament. It is one of the five Scrolls (Megillot) in the Hebrew Bible. It relates the story of a Hebrew woman in Persia, born as Hadassah but known as Esther, who becomes queen of Persia and thwarts a genocide of her people. The story forms the core of the Jewish festival of Purim, during which it is read aloud twice: once in the evening and again the following morning. The books of Esther and Song of Songs are the only books in the Hebrew Bible that do not explicitly mention God.[2]", "document: Times Square Times Square is a major commercial intersection, tourist destination, entertainment center and neighborhood in the Midtown Manhattan section of New York City at the junction of Broadway and Seventh Avenue. It stretches from West 42nd to West 47th Streets.[1] Brightly adorned with billboards and advertisements, Times Square is sometimes referred to as \"The Crossroads of the World\",[2] \"The Center of the Universe\",[3] \"the heart of The Great White Way\",[4][5][6] and the \"heart of the world\".[7] One of the world's busiest pedestrian areas,[8] it is also the hub of the Broadway Theater District[9] and a major center of the world's entertainment industry.[10] Times Square is one of the world's most visited tourist attractions, drawing an estimated 50 million visitors annually.[11] Approximately 330,000 people pass through Times Square daily,[12] many of them tourists,[13] while over 460,000 pedestrians walk through Times Square on its busiest days.[7]"]}], "co2_eq_emissions": {"emissions": 103.95223177174714, "energy_consumed": 0.2674342601060636, "source": "codecarbon", "training_type": "fine-tuning", "on_cloud": false, "cpu_model": "13th Gen Intel(R) Core(TM) i7-13700K", "ram_total_size": 31.777088165283203, "hours_used": 0.776, "hardware_used": "1 x NVIDIA GeForce RTX 3090"}, "model-index": [{"name": "BERT base trained on Natural Questions pairs", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "NanoClimateFEVER", "type": "NanoClimateFEVER"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.24, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.36, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.42, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.6, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.24, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.14666666666666667, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.10800000000000001, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.076, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.08833333333333332, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.1733333333333333, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.205, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.31066666666666665, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.23668411144897733, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.32507936507936497, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.18064440317511302, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "NanoDBPedia", "type": "NanoDBPedia"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.58, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.74, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.84, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.58, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.46, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.43200000000000005, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.4, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.060187987174836206, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.10977424825151455, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.16707520990044147, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.24597415193723152, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.4733134773883028, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.6808571428571429, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.33434372400711937, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "NanoFEVER", "type": "NanoFEVER"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.52, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.66, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.68, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.76, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.52, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.22, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.14400000000000002, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.08, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.5, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.63, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.67, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.75, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.6250288470609421, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.5971904761904763, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.5841699073691555, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "NanoFiQA2018", "type": "NanoFiQA2018"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.14, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.3, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.4, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.44, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.14, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.11333333333333333, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.10800000000000001, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.064, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.07933333333333334, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.16352380952380952, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.22846031746031745, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.27512698412698416, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.2070483011862227, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.23955555555555555, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.17184447175268844, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "NanoHotpotQA", "type": "NanoHotpotQA"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.54, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.62, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.66, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.78, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.54, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.25333333333333335, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.17199999999999996, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.10799999999999997, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.27, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.38, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.43, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.54, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.4758825161205549, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.5948571428571429, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.403633154924419, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "NanoMSMARCO", "type": "NanoMSMARCO"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.2, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.4, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.48, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.62, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.2, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.13333333333333333, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.096, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.06200000000000001, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.2, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.4, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.48, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.62, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.3929333444965005, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.3225793650793651, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.3345903944684922, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "NanoNFCorpus", "type": "NanoNFCorpus"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.32, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.46, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.52, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.58, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.32, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.28, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.22800000000000004, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.16999999999999996, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.023393732410294653, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.04028202721825723, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.05292320850853196, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.06512766188420571, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.21330057691798984, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.40985714285714286, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.07333772175450959, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "NanoNQ", "type": "NanoNQ"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.42, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.58, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.64, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.7, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.42, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.19333333333333333, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.128, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.07, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.4, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.56, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.62, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.67, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.5390417243338262, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.5118333333333334, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.5014983526115104, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "NanoQuoraRetrieval", "type": "NanoQuoraRetrieval"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.68, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.9, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.94, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.94, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.68, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.3533333333333333, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.23599999999999993, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.12599999999999997, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.6106666666666666, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.8486666666666668, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.9093333333333333, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9266666666666667, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.8205618979026005, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.7846666666666667, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.786847374847375, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "NanoSCIDOCS", "type": "NanoSCIDOCS"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.26, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.46, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.46, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.58, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.26, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.2333333333333333, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.17600000000000002, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.132, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.05566666666666667, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.14466666666666667, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.1806666666666667, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.27066666666666667, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.2517704665914677, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.36450000000000005, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.20084375671559634, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "NanoArguAna", "type": "NanoArguAna"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.14, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.5, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.58, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.76, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.14, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.16666666666666663, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.11600000000000002, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.07600000000000001, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.14, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.5, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.58, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.76, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.4417985537040473, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.3413253968253968, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.3506916603232609, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "NanoSciFact", "type": "NanoSciFact"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.38, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.54, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.56, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.6, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.38, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.19333333333333333, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.128, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.07, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.345, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.51, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.545, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.59, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.48570181290684433, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.46035714285714285, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.4539281050639794, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "NanoTouche2020", "type": "NanoTouche2020"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.5510204081632653, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.7959183673469388, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.9183673469387755, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9795918367346939, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.5510204081632653, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.45578231292516996, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.4326530612244897, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.37755102040816335, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.040936400203138934, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.10543098224373823, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.15289328979061165, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.2540307547275961, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.4244756661687274, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.689310009718173, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.3161855102539037, "name": "Cosine Map@100"}]}, {"task": {"type": "nano-beir", "name": "Nano BEIR"}, "dataset": {"name": "NanoBEIR mean", "type": "NanoBEIR_mean"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.38238618524332807, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.5627629513343799, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.6229513343799058, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.7107378335949763, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.38238618524332807, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.24634222919937204, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.19266562009419153, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.13935007849293563, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.21642447075294383, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.3512059795310759, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.40164246351230015, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.48294304251353987, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.42981086894053877, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.48630528768283876, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.36096604132824023, "name": "Cosine Map@100"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,513 |
yahyaabd/allstats-semantic-mpnet
|
yahyaabd
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:88250",
"loss:CosineSimilarityLoss",
"dataset:yahyaabd/allstats-semantic-dataset-v4",
"arxiv:1908.10084",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-01-14T13:49:07Z |
2025-01-14T13:49:42+00:00
| 5 | 0 |
---
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
datasets:
- yahyaabd/allstats-semantic-dataset-v4
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:88250
- loss:CosineSimilarityLoss
widget:
- source_sentence: Laporan ekspor Indonesia Juli 2020
sentences:
- Statistik Produksi Kehutanan 2021
- Buletin Statistik Perdagangan Luar Negeri Ekspor Menurut HS, Juli 2020
- Statistik Politik 2017
- source_sentence: Bulan apa yang dicatat data kunjungan wisatawan mancanegara?
sentences:
- Indeks Tendensi Bisnis dan Indeks Tendensi Konsumen 2005
- Data NTP bulan Maret 2022.
- Kunjungan wisatawan mancanegara pada Oktober 2023 mencapai 978,50 ribu kunjungan,
naik 33,27 persen (year-on-year)
- source_sentence: Seberapa besar kenaikan upah nominal harian buruh tani nasional
Januari 2016?
sentences:
- Keadaan Angkatan Kerja di Indonesia Mei 2013
- Profil Pasar Gorontalo 2020
- Tingkat pengangguran terbuka (TPT) Agustus 2024 sebesar 5,3%.
- source_sentence: Ringkasan data statistik Indonesia 1997
sentences:
- Statistik Upah 2007
- Harga konsumen bbrp jenis barang kelompok perumahan 2005
- Statistik Indonesia 1997
- source_sentence: Pernikahan usia anak di Indonesia periode 2013-2015
sentences:
- Jumlah penduduk Indonesia 2013-2015
- Indikator Ekonomi Desember 2006
- Indeks Tendensi Bisnis dan Indeks Tendensi Konsumen 2013
model-index:
- name: SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: allstats semantic mpnet eval
type: allstats-semantic-mpnet-eval
metrics:
- type: pearson_cosine
value: 0.9714169395957917
name: Pearson Cosine
- type: spearman_cosine
value: 0.8933550959155299
name: Spearman Cosine
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: allstats semantic mpnet test
type: allstats-semantic-mpnet-test
metrics:
- type: pearson_cosine
value: 0.9723087139367028
name: Pearson Cosine
- type: spearman_cosine
value: 0.8932385415736595
name: Spearman Cosine
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) on the [allstats-semantic-dataset-v4](https://huggingface.co/datasets/yahyaabd/allstats-semantic-dataset-v4) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) <!-- at revision 75c57757a97f90ad739aca51fa8bfea0e485a7f2 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [allstats-semantic-dataset-v4](https://huggingface.co/datasets/yahyaabd/allstats-semantic-dataset-v4)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("yahyaabd/allstats-semantic-mpnet")
# Run inference
sentences = [
'Pernikahan usia anak di Indonesia periode 2013-2015',
'Jumlah penduduk Indonesia 2013-2015',
'Indeks Tendensi Bisnis dan Indeks Tendensi Konsumen 2013',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Datasets: `allstats-semantic-mpnet-eval` and `allstats-semantic-mpnet-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | allstats-semantic-mpnet-eval | allstats-semantic-mpnet-test |
|:--------------------|:-----------------------------|:-----------------------------|
| pearson_cosine | 0.9714 | 0.9723 |
| **spearman_cosine** | **0.8934** | **0.8932** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### allstats-semantic-dataset-v4
* Dataset: [allstats-semantic-dataset-v4](https://huggingface.co/datasets/yahyaabd/allstats-semantic-dataset-v4) at [06c3cf8](https://huggingface.co/datasets/yahyaabd/allstats-semantic-dataset-v4/tree/06c3cf8715472fba6be04302a12790a6bd80443a)
* Size: 88,250 training samples
* Columns: <code>query</code>, <code>doc</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | query | doc | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 11.38 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 14.48 tokens</li><li>max: 67 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.51</li><li>max: 1.0</li></ul> |
* Samples:
| query | doc | label |
|:-----------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>Industri teh Indonesia tahun 2021</code> | <code>Statistik Transportasi Laut 2014</code> | <code>0.1</code> |
| <code>Tahun berapa data pertumbuhan ekonomi Indonesia tersebut?</code> | <code>Nilai Tukar Petani (NTP) November 2023 sebesar 116,73 atau naik 0,82 persen. Harga Gabah Kering Panen di Tingkat Petani turun 1,94 persen dan Harga Beras Premium di Penggilingan turun 0,91 persen.</code> | <code>0.0</code> |
| <code>Kemiskinan di Indonesia Maret</code> | <code>2018 Feb Tenaga Kerja</code> | <code>0.1</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Evaluation Dataset
#### allstats-semantic-dataset-v4
* Dataset: [allstats-semantic-dataset-v4](https://huggingface.co/datasets/yahyaabd/allstats-semantic-dataset-v4) at [06c3cf8](https://huggingface.co/datasets/yahyaabd/allstats-semantic-dataset-v4/tree/06c3cf8715472fba6be04302a12790a6bd80443a)
* Size: 18,910 evaluation samples
* Columns: <code>query</code>, <code>doc</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | query | doc | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 11.35 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 14.25 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.49</li><li>max: 1.0</li></ul> |
* Samples:
| query | doc | label |
|:--------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>nAalisis keuangam deas tshun 019</code> | <code>Statistik Migrasi Nusa Tenggara Barat Hasil Survei Penduduk Antar Sensus 2015</code> | <code>0.1</code> |
| <code>Data tanaman buah dan sayur Indonesia tahun 2016</code> | <code>Statistik Penduduk Lanjut Usia 2010</code> | <code>0.1</code> |
| <code>Pasar beras di Indonesia tahun 2018</code> | <code>Buletin Statistik Perdagangan Luar Negeri Ekspor Menurut Kelompok Komoditi dan Negara, April 2021</code> | <code>0.2</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 8
- `warmup_ratio`: 0.1
- `fp16`: True
- `dataloader_num_workers`: 4
- `load_best_model_at_end`: True
- `label_smoothing_factor`: 0.05
- `eval_on_start`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 8
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.05
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: True
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | allstats-semantic-mpnet-eval_spearman_cosine | allstats-semantic-mpnet-test_spearman_cosine |
|:----------:|:---------:|:-------------:|:---------------:|:--------------------------------------------:|:--------------------------------------------:|
| 0 | 0 | - | 0.0979 | 0.6119 | - |
| 0.0906 | 250 | 0.0646 | 0.0427 | 0.7249 | - |
| 0.1813 | 500 | 0.039 | 0.0324 | 0.7596 | - |
| 0.2719 | 750 | 0.032 | 0.0271 | 0.7860 | - |
| 0.3626 | 1000 | 0.0276 | 0.0255 | 0.7920 | - |
| 0.4532 | 1250 | 0.0264 | 0.0230 | 0.8072 | - |
| 0.5439 | 1500 | 0.0249 | 0.0222 | 0.8197 | - |
| 0.6345 | 1750 | 0.0226 | 0.0210 | 0.8200 | - |
| 0.7252 | 2000 | 0.0218 | 0.0209 | 0.8202 | - |
| 0.8158 | 2250 | 0.0208 | 0.0201 | 0.8346 | - |
| 0.9065 | 2500 | 0.0209 | 0.0211 | 0.8240 | - |
| 0.9971 | 2750 | 0.0211 | 0.0190 | 0.8170 | - |
| 1.0877 | 3000 | 0.0161 | 0.0182 | 0.8332 | - |
| 1.1784 | 3250 | 0.0158 | 0.0179 | 0.8393 | - |
| 1.2690 | 3500 | 0.0167 | 0.0189 | 0.8341 | - |
| 1.3597 | 3750 | 0.0152 | 0.0168 | 0.8371 | - |
| 1.4503 | 4000 | 0.0151 | 0.0165 | 0.8435 | - |
| 1.5410 | 4250 | 0.0143 | 0.0156 | 0.8365 | - |
| 1.6316 | 4500 | 0.0147 | 0.0157 | 0.8467 | - |
| 1.7223 | 4750 | 0.0138 | 0.0155 | 0.8501 | - |
| 1.8129 | 5000 | 0.0147 | 0.0154 | 0.8457 | - |
| 1.9036 | 5250 | 0.0137 | 0.0152 | 0.8498 | - |
| 1.9942 | 5500 | 0.0144 | 0.0143 | 0.8485 | - |
| 2.0848 | 5750 | 0.0108 | 0.0139 | 0.8439 | - |
| 2.1755 | 6000 | 0.01 | 0.0146 | 0.8563 | - |
| 2.2661 | 6250 | 0.011 | 0.0141 | 0.8558 | - |
| 2.3568 | 6500 | 0.0107 | 0.0144 | 0.8497 | - |
| 2.4474 | 6750 | 0.01 | 0.0138 | 0.8577 | - |
| 2.5381 | 7000 | 0.0097 | 0.0136 | 0.8585 | - |
| 2.6287 | 7250 | 0.0102 | 0.0135 | 0.8521 | - |
| 2.7194 | 7500 | 0.0106 | 0.0133 | 0.8537 | - |
| 2.8100 | 7750 | 0.0098 | 0.0133 | 0.8643 | - |
| 2.9007 | 8000 | 0.0105 | 0.0138 | 0.8543 | - |
| 2.9913 | 8250 | 0.009 | 0.0129 | 0.8555 | - |
| 3.0819 | 8500 | 0.0071 | 0.0121 | 0.8692 | - |
| 3.1726 | 8750 | 0.006 | 0.0120 | 0.8709 | - |
| 3.2632 | 9000 | 0.0078 | 0.0120 | 0.8660 | - |
| 3.3539 | 9250 | 0.0072 | 0.0122 | 0.8656 | - |
| 3.4445 | 9500 | 0.007 | 0.0123 | 0.8696 | - |
| 3.5352 | 9750 | 0.0075 | 0.0117 | 0.8707 | - |
| 3.6258 | 10000 | 0.0081 | 0.0115 | 0.8682 | - |
| 3.7165 | 10250 | 0.0083 | 0.0116 | 0.8617 | - |
| 3.8071 | 10500 | 0.0075 | 0.0116 | 0.8665 | - |
| 3.8978 | 10750 | 0.0077 | 0.0119 | 0.8733 | - |
| 3.9884 | 11000 | 0.008 | 0.0113 | 0.8678 | - |
| 4.0790 | 11250 | 0.0051 | 0.0110 | 0.8760 | - |
| 4.1697 | 11500 | 0.0052 | 0.0108 | 0.8729 | - |
| 4.2603 | 11750 | 0.0056 | 0.0108 | 0.8771 | - |
| 4.3510 | 12000 | 0.0052 | 0.0109 | 0.8793 | - |
| 4.4416 | 12250 | 0.0049 | 0.0109 | 0.8766 | - |
| 4.5323 | 12500 | 0.0055 | 0.0114 | 0.8742 | - |
| 4.6229 | 12750 | 0.0061 | 0.0108 | 0.8749 | - |
| 4.7136 | 13000 | 0.0058 | 0.0109 | 0.8833 | - |
| 4.8042 | 13250 | 0.0049 | 0.0108 | 0.8767 | - |
| 4.8949 | 13500 | 0.0046 | 0.0108 | 0.8839 | - |
| 4.9855 | 13750 | 0.0052 | 0.0104 | 0.8790 | - |
| 5.0761 | 14000 | 0.0041 | 0.0102 | 0.8826 | - |
| 5.1668 | 14250 | 0.004 | 0.0103 | 0.8775 | - |
| 5.2574 | 14500 | 0.0036 | 0.0102 | 0.8855 | - |
| 5.3481 | 14750 | 0.0037 | 0.0104 | 0.8841 | - |
| 5.4387 | 15000 | 0.0036 | 0.0101 | 0.8860 | - |
| 5.5294 | 15250 | 0.0043 | 0.0104 | 0.8852 | - |
| 5.6200 | 15500 | 0.004 | 0.0100 | 0.8856 | - |
| 5.7107 | 15750 | 0.0043 | 0.0101 | 0.8842 | - |
| 5.8013 | 16000 | 0.0043 | 0.0099 | 0.8835 | - |
| 5.8920 | 16250 | 0.0041 | 0.0099 | 0.8852 | - |
| 5.9826 | 16500 | 0.0036 | 0.0101 | 0.8866 | - |
| 6.0732 | 16750 | 0.0031 | 0.0100 | 0.8881 | - |
| 6.1639 | 17000 | 0.0031 | 0.0098 | 0.8880 | - |
| 6.2545 | 17250 | 0.0027 | 0.0098 | 0.8886 | - |
| 6.3452 | 17500 | 0.0032 | 0.0097 | 0.8868 | - |
| 6.4358 | 17750 | 0.0027 | 0.0097 | 0.8876 | - |
| 6.5265 | 18000 | 0.0031 | 0.0097 | 0.8893 | - |
| 6.6171 | 18250 | 0.0032 | 0.0096 | 0.8903 | - |
| 6.7078 | 18500 | 0.003 | 0.0096 | 0.8898 | - |
| 6.7984 | 18750 | 0.0029 | 0.0098 | 0.8907 | - |
| 6.8891 | 19000 | 0.003 | 0.0096 | 0.8896 | - |
| 6.9797 | 19250 | 0.0026 | 0.0096 | 0.8913 | - |
| 7.0703 | 19500 | 0.0024 | 0.0096 | 0.8921 | - |
| 7.1610 | 19750 | 0.0021 | 0.0097 | 0.8920 | - |
| 7.2516 | 20000 | 0.0023 | 0.0096 | 0.8910 | - |
| 7.3423 | 20250 | 0.002 | 0.0096 | 0.8920 | - |
| 7.4329 | 20500 | 0.0022 | 0.0096 | 0.8924 | - |
| 7.5236 | 20750 | 0.002 | 0.0097 | 0.8917 | - |
| 7.6142 | 21000 | 0.0024 | 0.0096 | 0.8923 | - |
| 7.7049 | 21250 | 0.0025 | 0.0095 | 0.8928 | - |
| 7.7955 | 21500 | 0.0022 | 0.0095 | 0.8931 | - |
| 7.8861 | 21750 | 0.0023 | 0.0095 | 0.8932 | - |
| **7.9768** | **22000** | **0.0022** | **0.0095** | **0.8934** | **-** |
| 8.0 | 22064 | - | - | - | 0.8932 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.48.0
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) on the [allstats-semantic-dataset-v4](https://huggingface.co/datasets/yahyaabd/allstats-semantic-dataset-v4) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) <!-- at revision 75c57757a97f90ad739aca51fa8bfea0e485a7f2 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [allstats-semantic-dataset-v4](https://huggingface.co/datasets/yahyaabd/allstats-semantic-dataset-v4)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("yahyaabd/allstats-semantic-mpnet")
# Run inference
sentences = [
'Pernikahan usia anak di Indonesia periode 2013-2015',
'Jumlah penduduk Indonesia 2013-2015',
'Indeks Tendensi Bisnis dan Indeks Tendensi Konsumen 2013',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Datasets: `allstats-semantic-mpnet-eval` and `allstats-semantic-mpnet-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | allstats-semantic-mpnet-eval | allstats-semantic-mpnet-test |
|:--------------------|:-----------------------------|:-----------------------------|
| pearson_cosine | 0.9714 | 0.9723 |
| **spearman_cosine** | **0.8934** | **0.8932** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### allstats-semantic-dataset-v4
* Dataset: [allstats-semantic-dataset-v4](https://huggingface.co/datasets/yahyaabd/allstats-semantic-dataset-v4) at [06c3cf8](https://huggingface.co/datasets/yahyaabd/allstats-semantic-dataset-v4/tree/06c3cf8715472fba6be04302a12790a6bd80443a)
* Size: 88,250 training samples
* Columns: <code>query</code>, <code>doc</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | query | doc | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 11.38 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 14.48 tokens</li><li>max: 67 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.51</li><li>max: 1.0</li></ul> |
* Samples:
| query | doc | label |
|:-----------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>Industri teh Indonesia tahun 2021</code> | <code>Statistik Transportasi Laut 2014</code> | <code>0.1</code> |
| <code>Tahun berapa data pertumbuhan ekonomi Indonesia tersebut?</code> | <code>Nilai Tukar Petani (NTP) November 2023 sebesar 116,73 atau naik 0,82 persen. Harga Gabah Kering Panen di Tingkat Petani turun 1,94 persen dan Harga Beras Premium di Penggilingan turun 0,91 persen.</code> | <code>0.0</code> |
| <code>Kemiskinan di Indonesia Maret</code> | <code>2018 Feb Tenaga Kerja</code> | <code>0.1</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Evaluation Dataset
#### allstats-semantic-dataset-v4
* Dataset: [allstats-semantic-dataset-v4](https://huggingface.co/datasets/yahyaabd/allstats-semantic-dataset-v4) at [06c3cf8](https://huggingface.co/datasets/yahyaabd/allstats-semantic-dataset-v4/tree/06c3cf8715472fba6be04302a12790a6bd80443a)
* Size: 18,910 evaluation samples
* Columns: <code>query</code>, <code>doc</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | query | doc | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 11.35 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 14.25 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.49</li><li>max: 1.0</li></ul> |
* Samples:
| query | doc | label |
|:--------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>nAalisis keuangam deas tshun 019</code> | <code>Statistik Migrasi Nusa Tenggara Barat Hasil Survei Penduduk Antar Sensus 2015</code> | <code>0.1</code> |
| <code>Data tanaman buah dan sayur Indonesia tahun 2016</code> | <code>Statistik Penduduk Lanjut Usia 2010</code> | <code>0.1</code> |
| <code>Pasar beras di Indonesia tahun 2018</code> | <code>Buletin Statistik Perdagangan Luar Negeri Ekspor Menurut Kelompok Komoditi dan Negara, April 2021</code> | <code>0.2</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 8
- `warmup_ratio`: 0.1
- `fp16`: True
- `dataloader_num_workers`: 4
- `load_best_model_at_end`: True
- `label_smoothing_factor`: 0.05
- `eval_on_start`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 8
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.05
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: True
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | allstats-semantic-mpnet-eval_spearman_cosine | allstats-semantic-mpnet-test_spearman_cosine |
|:----------:|:---------:|:-------------:|:---------------:|:--------------------------------------------:|:--------------------------------------------:|
| 0 | 0 | - | 0.0979 | 0.6119 | - |
| 0.0906 | 250 | 0.0646 | 0.0427 | 0.7249 | - |
| 0.1813 | 500 | 0.039 | 0.0324 | 0.7596 | - |
| 0.2719 | 750 | 0.032 | 0.0271 | 0.7860 | - |
| 0.3626 | 1000 | 0.0276 | 0.0255 | 0.7920 | - |
| 0.4532 | 1250 | 0.0264 | 0.0230 | 0.8072 | - |
| 0.5439 | 1500 | 0.0249 | 0.0222 | 0.8197 | - |
| 0.6345 | 1750 | 0.0226 | 0.0210 | 0.8200 | - |
| 0.7252 | 2000 | 0.0218 | 0.0209 | 0.8202 | - |
| 0.8158 | 2250 | 0.0208 | 0.0201 | 0.8346 | - |
| 0.9065 | 2500 | 0.0209 | 0.0211 | 0.8240 | - |
| 0.9971 | 2750 | 0.0211 | 0.0190 | 0.8170 | - |
| 1.0877 | 3000 | 0.0161 | 0.0182 | 0.8332 | - |
| 1.1784 | 3250 | 0.0158 | 0.0179 | 0.8393 | - |
| 1.2690 | 3500 | 0.0167 | 0.0189 | 0.8341 | - |
| 1.3597 | 3750 | 0.0152 | 0.0168 | 0.8371 | - |
| 1.4503 | 4000 | 0.0151 | 0.0165 | 0.8435 | - |
| 1.5410 | 4250 | 0.0143 | 0.0156 | 0.8365 | - |
| 1.6316 | 4500 | 0.0147 | 0.0157 | 0.8467 | - |
| 1.7223 | 4750 | 0.0138 | 0.0155 | 0.8501 | - |
| 1.8129 | 5000 | 0.0147 | 0.0154 | 0.8457 | - |
| 1.9036 | 5250 | 0.0137 | 0.0152 | 0.8498 | - |
| 1.9942 | 5500 | 0.0144 | 0.0143 | 0.8485 | - |
| 2.0848 | 5750 | 0.0108 | 0.0139 | 0.8439 | - |
| 2.1755 | 6000 | 0.01 | 0.0146 | 0.8563 | - |
| 2.2661 | 6250 | 0.011 | 0.0141 | 0.8558 | - |
| 2.3568 | 6500 | 0.0107 | 0.0144 | 0.8497 | - |
| 2.4474 | 6750 | 0.01 | 0.0138 | 0.8577 | - |
| 2.5381 | 7000 | 0.0097 | 0.0136 | 0.8585 | - |
| 2.6287 | 7250 | 0.0102 | 0.0135 | 0.8521 | - |
| 2.7194 | 7500 | 0.0106 | 0.0133 | 0.8537 | - |
| 2.8100 | 7750 | 0.0098 | 0.0133 | 0.8643 | - |
| 2.9007 | 8000 | 0.0105 | 0.0138 | 0.8543 | - |
| 2.9913 | 8250 | 0.009 | 0.0129 | 0.8555 | - |
| 3.0819 | 8500 | 0.0071 | 0.0121 | 0.8692 | - |
| 3.1726 | 8750 | 0.006 | 0.0120 | 0.8709 | - |
| 3.2632 | 9000 | 0.0078 | 0.0120 | 0.8660 | - |
| 3.3539 | 9250 | 0.0072 | 0.0122 | 0.8656 | - |
| 3.4445 | 9500 | 0.007 | 0.0123 | 0.8696 | - |
| 3.5352 | 9750 | 0.0075 | 0.0117 | 0.8707 | - |
| 3.6258 | 10000 | 0.0081 | 0.0115 | 0.8682 | - |
| 3.7165 | 10250 | 0.0083 | 0.0116 | 0.8617 | - |
| 3.8071 | 10500 | 0.0075 | 0.0116 | 0.8665 | - |
| 3.8978 | 10750 | 0.0077 | 0.0119 | 0.8733 | - |
| 3.9884 | 11000 | 0.008 | 0.0113 | 0.8678 | - |
| 4.0790 | 11250 | 0.0051 | 0.0110 | 0.8760 | - |
| 4.1697 | 11500 | 0.0052 | 0.0108 | 0.8729 | - |
| 4.2603 | 11750 | 0.0056 | 0.0108 | 0.8771 | - |
| 4.3510 | 12000 | 0.0052 | 0.0109 | 0.8793 | - |
| 4.4416 | 12250 | 0.0049 | 0.0109 | 0.8766 | - |
| 4.5323 | 12500 | 0.0055 | 0.0114 | 0.8742 | - |
| 4.6229 | 12750 | 0.0061 | 0.0108 | 0.8749 | - |
| 4.7136 | 13000 | 0.0058 | 0.0109 | 0.8833 | - |
| 4.8042 | 13250 | 0.0049 | 0.0108 | 0.8767 | - |
| 4.8949 | 13500 | 0.0046 | 0.0108 | 0.8839 | - |
| 4.9855 | 13750 | 0.0052 | 0.0104 | 0.8790 | - |
| 5.0761 | 14000 | 0.0041 | 0.0102 | 0.8826 | - |
| 5.1668 | 14250 | 0.004 | 0.0103 | 0.8775 | - |
| 5.2574 | 14500 | 0.0036 | 0.0102 | 0.8855 | - |
| 5.3481 | 14750 | 0.0037 | 0.0104 | 0.8841 | - |
| 5.4387 | 15000 | 0.0036 | 0.0101 | 0.8860 | - |
| 5.5294 | 15250 | 0.0043 | 0.0104 | 0.8852 | - |
| 5.6200 | 15500 | 0.004 | 0.0100 | 0.8856 | - |
| 5.7107 | 15750 | 0.0043 | 0.0101 | 0.8842 | - |
| 5.8013 | 16000 | 0.0043 | 0.0099 | 0.8835 | - |
| 5.8920 | 16250 | 0.0041 | 0.0099 | 0.8852 | - |
| 5.9826 | 16500 | 0.0036 | 0.0101 | 0.8866 | - |
| 6.0732 | 16750 | 0.0031 | 0.0100 | 0.8881 | - |
| 6.1639 | 17000 | 0.0031 | 0.0098 | 0.8880 | - |
| 6.2545 | 17250 | 0.0027 | 0.0098 | 0.8886 | - |
| 6.3452 | 17500 | 0.0032 | 0.0097 | 0.8868 | - |
| 6.4358 | 17750 | 0.0027 | 0.0097 | 0.8876 | - |
| 6.5265 | 18000 | 0.0031 | 0.0097 | 0.8893 | - |
| 6.6171 | 18250 | 0.0032 | 0.0096 | 0.8903 | - |
| 6.7078 | 18500 | 0.003 | 0.0096 | 0.8898 | - |
| 6.7984 | 18750 | 0.0029 | 0.0098 | 0.8907 | - |
| 6.8891 | 19000 | 0.003 | 0.0096 | 0.8896 | - |
| 6.9797 | 19250 | 0.0026 | 0.0096 | 0.8913 | - |
| 7.0703 | 19500 | 0.0024 | 0.0096 | 0.8921 | - |
| 7.1610 | 19750 | 0.0021 | 0.0097 | 0.8920 | - |
| 7.2516 | 20000 | 0.0023 | 0.0096 | 0.8910 | - |
| 7.3423 | 20250 | 0.002 | 0.0096 | 0.8920 | - |
| 7.4329 | 20500 | 0.0022 | 0.0096 | 0.8924 | - |
| 7.5236 | 20750 | 0.002 | 0.0097 | 0.8917 | - |
| 7.6142 | 21000 | 0.0024 | 0.0096 | 0.8923 | - |
| 7.7049 | 21250 | 0.0025 | 0.0095 | 0.8928 | - |
| 7.7955 | 21500 | 0.0022 | 0.0095 | 0.8931 | - |
| 7.8861 | 21750 | 0.0023 | 0.0095 | 0.8932 | - |
| **7.9768** | **22000** | **0.0022** | **0.0095** | **0.8934** | **-** |
| 8.0 | 22064 | - | - | - | 0.8932 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.48.0
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/paraphrase-multilingual-mpnet-base-v2", "datasets": ["yahyaabd/allstats-semantic-dataset-v4"], "library_name": "sentence-transformers", "metrics": ["pearson_cosine", "spearman_cosine"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:88250", "loss:CosineSimilarityLoss"], "widget": [{"source_sentence": "Laporan ekspor Indonesia Juli 2020", "sentences": ["Statistik Produksi Kehutanan 2021", "Buletin Statistik Perdagangan Luar Negeri Ekspor Menurut HS, Juli 2020", "Statistik Politik 2017"]}, {"source_sentence": "Bulan apa yang dicatat data kunjungan wisatawan mancanegara?", "sentences": ["Indeks Tendensi Bisnis dan Indeks Tendensi Konsumen 2005", "Data NTP bulan Maret 2022.", "Kunjungan wisatawan mancanegara pada Oktober 2023 mencapai 978,50 ribu kunjungan, naik 33,27 persen (year-on-year)"]}, {"source_sentence": "Seberapa besar kenaikan upah nominal harian buruh tani nasional Januari 2016?", "sentences": ["Keadaan Angkatan Kerja di Indonesia Mei 2013", "Profil Pasar Gorontalo 2020", "Tingkat pengangguran terbuka (TPT) Agustus 2024 sebesar 5,3%."]}, {"source_sentence": "Ringkasan data statistik Indonesia 1997", "sentences": ["Statistik Upah 2007", "Harga konsumen bbrp jenis barang kelompok perumahan 2005", "Statistik Indonesia 1997"]}, {"source_sentence": "Pernikahan usia anak di Indonesia periode 2013-2015", "sentences": ["Jumlah penduduk Indonesia 2013-2015", "Indikator Ekonomi Desember 2006", "Indeks Tendensi Bisnis dan Indeks Tendensi Konsumen 2013"]}], "model-index": [{"name": "SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "allstats semantic mpnet eval", "type": "allstats-semantic-mpnet-eval"}, "metrics": [{"type": "pearson_cosine", "value": 0.9714169395957917, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.8933550959155299, "name": "Spearman Cosine"}]}, {"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "allstats semantic mpnet test", "type": "allstats-semantic-mpnet-test"}, "metrics": [{"type": "pearson_cosine", "value": 0.9723087139367028, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.8932385415736595, "name": "Spearman Cosine"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] | 42,514 |
eslamxm/arabert2arabert-finetuned-ar-xlsum
|
eslamxm
|
summarization
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"summarization",
"ar",
"arabert",
"Abstractive Summarization",
"generated_from_trainer",
"dataset:xlsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-06-06T22:31:07Z |
2022-06-07T09:34:31+00:00
| 21 | 0 |
---
datasets:
- xlsum
tags:
- summarization
- ar
- encoder-decoder
- arabert
- Abstractive Summarization
- generated_from_trainer
model-index:
- name: arabert2arabert-finetuned-ar-xlsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arabert2arabert-finetuned-ar-xlsum
This model is a fine-tuned version of [](https://huggingface.co/) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1557
- Rouge-1: 25.3
- Rouge-2: 10.46
- Rouge-l: 22.12
- Gen Len: 20.0
- Bertscore: 71.98
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arabert2arabert-finetuned-ar-xlsum
This model is a fine-tuned version of [](https://huggingface.co/) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1557
- Rouge-1: 25.3
- Rouge-2: 10.46
- Rouge-l: 22.12
- Gen Len: 20.0
- Bertscore: 71.98
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
{"datasets": ["xlsum"], "tags": ["summarization", "ar", "encoder-decoder", "arabert", "Abstractive Summarization", "generated_from_trainer"], "model-index": [{"name": "arabert2arabert-finetuned-ar-xlsum", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 42,515 |
pravin691983/fine-tunned-classification-bert-lg-25K-32
|
pravin691983
|
text-classification
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain",
"dataset:few-shot-learning-classification-bert-lg-25K-32/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-05-07T09:43:33Z |
2024-06-10T04:49:04+00:00
| 9 | 0 |
---
datasets:
- few-shot-learning-classification-bert-lg-25K-32/autotrain-data
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
# Publisher Info
- Publisher, PRAVIN SURESH TAWADE
- Co-Publisher, Dr.JAYA KRISHNA GUTHA
## Validation Metrics
loss: 0.176192045211792
f1_macro: 0.9413396091925337
f1_micro: 0.94135
f1_weighted: 0.9413396091925339
precision_macro: 0.941596847721329
precision_micro: 0.94135
precision_weighted: 0.941596847721329
recall_macro: 0.9413499999999999
recall_micro: 0.94135
recall_weighted: 0.94135
accuracy: 0.94135
## Data in depth
One of the potential business applications of few-shot text classification with the AG News dataset is in media and content companies. They could implement this technology to categorize news articles on world, sports, business, technology, and other topics with minimal labeled data. This few-shot model application would allow for more efficient management and retrieval of news content, improving user satisfaction with personalized news feed. Moreover, such a model will allow these companies to promptly adjust their classification to new categories or rapidly emerging topics in dynamic industries.
With a concern that the repetition of the source material may impair the perception of the results of my adaptation, I would prefer to avoid working with the same data I encountered during the course. Therefore, I would like to select a diverse text dataset where the number of the labelled examples available for each of the classes is limited. Additionally, in order to evaluate the effectiveness of the model, I would consider varying the domains and types of documents. The work will begin with the choice of the dataset, and the one I have selected is the AG’s News Corpus, which can be accessed on Hugging Face. In my study, I use this collection of news articles, divided into four primary classes: World, Sports, Business, and Sci/Tech. The sizes of the dataset are as follows: 30,000 training samples and 1,900 test samples for each of the classes.
- Dataset size: 31.3 MB
- Data Split: 127600 rows
- Data Fields:
- Text: A feature represented by a string.
- Label: A set of classification labels comprising World (0), Sports (1), Business (2), and Sci/Tech (3).
| null |
Non_BioNLP
|
# Model Trained Using AutoTrain
- Problem type: Text Classification
# Publisher Info
- Publisher, PRAVIN SURESH TAWADE
- Co-Publisher, Dr.JAYA KRISHNA GUTHA
## Validation Metrics
loss: 0.176192045211792
f1_macro: 0.9413396091925337
f1_micro: 0.94135
f1_weighted: 0.9413396091925339
precision_macro: 0.941596847721329
precision_micro: 0.94135
precision_weighted: 0.941596847721329
recall_macro: 0.9413499999999999
recall_micro: 0.94135
recall_weighted: 0.94135
accuracy: 0.94135
## Data in depth
One of the potential business applications of few-shot text classification with the AG News dataset is in media and content companies. They could implement this technology to categorize news articles on world, sports, business, technology, and other topics with minimal labeled data. This few-shot model application would allow for more efficient management and retrieval of news content, improving user satisfaction with personalized news feed. Moreover, such a model will allow these companies to promptly adjust their classification to new categories or rapidly emerging topics in dynamic industries.
With a concern that the repetition of the source material may impair the perception of the results of my adaptation, I would prefer to avoid working with the same data I encountered during the course. Therefore, I would like to select a diverse text dataset where the number of the labelled examples available for each of the classes is limited. Additionally, in order to evaluate the effectiveness of the model, I would consider varying the domains and types of documents. The work will begin with the choice of the dataset, and the one I have selected is the AG’s News Corpus, which can be accessed on Hugging Face. In my study, I use this collection of news articles, divided into four primary classes: World, Sports, Business, and Sci/Tech. The sizes of the dataset are as follows: 30,000 training samples and 1,900 test samples for each of the classes.
- Dataset size: 31.3 MB
- Data Split: 127600 rows
- Data Fields:
- Text: A feature represented by a string.
- Label: A set of classification labels comprising World (0), Sports (1), Business (2), and Sci/Tech (3).
|
{"datasets": ["few-shot-learning-classification-bert-lg-25K-32/autotrain-data"], "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,516 |
RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf
|
RichardErkhov
| null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-10-16T21:34:34Z |
2024-10-17T00:24:37+00:00
| 161 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Bahasa-4b-chat - GGUF
- Model creator: https://huggingface.co/Bahasalab/
- Original model: https://huggingface.co/Bahasalab/Bahasa-4b-chat/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Bahasa-4b-chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q2_K.gguf) | Q2_K | 1.51GB |
| [Bahasa-4b-chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.IQ3_XS.gguf) | IQ3_XS | 1.66GB |
| [Bahasa-4b-chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.IQ3_S.gguf) | IQ3_S | 1.73GB |
| [Bahasa-4b-chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q3_K_S.gguf) | Q3_K_S | 1.73GB |
| [Bahasa-4b-chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.IQ3_M.gguf) | IQ3_M | 1.81GB |
| [Bahasa-4b-chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q3_K.gguf) | Q3_K | 1.89GB |
| [Bahasa-4b-chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q3_K_M.gguf) | Q3_K_M | 1.89GB |
| [Bahasa-4b-chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q3_K_L.gguf) | Q3_K_L | 2.03GB |
| [Bahasa-4b-chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.IQ4_XS.gguf) | IQ4_XS | 2.08GB |
| [Bahasa-4b-chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q4_0.gguf) | Q4_0 | 2.17GB |
| [Bahasa-4b-chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.IQ4_NL.gguf) | IQ4_NL | 2.18GB |
| [Bahasa-4b-chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q4_K_S.gguf) | Q4_K_S | 2.18GB |
| [Bahasa-4b-chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q4_K.gguf) | Q4_K | 2.29GB |
| [Bahasa-4b-chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q4_K_M.gguf) | Q4_K_M | 2.29GB |
| [Bahasa-4b-chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q4_1.gguf) | Q4_1 | 2.38GB |
| [Bahasa-4b-chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q5_0.gguf) | Q5_0 | 2.58GB |
| [Bahasa-4b-chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q5_K_S.gguf) | Q5_K_S | 2.58GB |
| [Bahasa-4b-chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q5_K.gguf) | Q5_K | 2.64GB |
| [Bahasa-4b-chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q5_K_M.gguf) | Q5_K_M | 2.64GB |
| [Bahasa-4b-chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q5_1.gguf) | Q5_1 | 2.79GB |
| [Bahasa-4b-chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q6_K.gguf) | Q6_K | 3.03GB |
| [Bahasa-4b-chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q8_0.gguf) | Q8_0 | 3.92GB |
Original model description:
---
language:
- id
license: other
license_name: tongyi-qianwen
---
# Bahasa-4b Model Report
## Model Name
**Bahasa-4b**
## Model Detail
Bahasa-4b is continued training from qwen-4b using 10 billion high quality text of Indonesian. The model outperforms some 4b, and even 7b models for Indonesian tasks.
## Model Developers
Bahasa AI
## Intended Use
This model is intended for various NLP tasks that require understanding and generating Indonesian language. It is suitable for applications such as question answering, sentiment analysis, document summarization, and more.
## Training Data
Bahasa-4b was trained on a 10 billion subset data of Indonesian dataset from a collected pool of 100 billion.
## Benchmarks
The following table shows the performance of Bahasa-4b compared to the models Sailor_4b and Mistral-7B-v0.1 across several benchmarks:
| Dataset | Version | Metric | Mode | Sailor_4b | Bahasa-4b-hf | Mistral-7B-v0.1 |
|----------------|---------|--------|------|-----------|--------------|-----------------|
| tydiqa-id | 0e9309 | EM | gen | 53.98 | 55.04 | 63.54 |
| tydiqa-id | 0e9309 | F1 | gen | 73.48 | 75.39 | 78.73 |
| xcopa-id | 36c11c | EM | ppl | 69.2 | 73.2 | 62.40 |
| xcopa-id | 36c11c | F1 | ppl | 69.2 | 73.2 | - |
| m3exam-id-ppl | ede415 | EM | ppl | 31.27 | 44.47 | 26.68 |
| belebele-id-ppl| 7fe030 | EM | ppl | 41.33 | 42.33 | 41.33 |
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"Bahasalab/Bahasa-4b-chat-v2",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Bahasalab/Bahasa-4b-chat")
messages = [
{"role": "system", "content": "Kamu adalah asisten yang membantu"},
{"role": "user", "content": "kamu siapa"}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
input_ids=model_inputs.input_ids,
attention_mask=model_inputs.attention_mask,
max_new_tokens=512,
eos_token_id=tokenizer.eos_token_id
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
This data demonstrates that Bahasa-4b consistently outperforms the Sailor_4b model in various Indonesian language tasks, showing improvements in both EM (Exact Match) and F1 scores across different datasets, and is competitive with the Mistral-7B-v0.1 model.
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Bahasa-4b-chat - GGUF
- Model creator: https://huggingface.co/Bahasalab/
- Original model: https://huggingface.co/Bahasalab/Bahasa-4b-chat/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Bahasa-4b-chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q2_K.gguf) | Q2_K | 1.51GB |
| [Bahasa-4b-chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.IQ3_XS.gguf) | IQ3_XS | 1.66GB |
| [Bahasa-4b-chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.IQ3_S.gguf) | IQ3_S | 1.73GB |
| [Bahasa-4b-chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q3_K_S.gguf) | Q3_K_S | 1.73GB |
| [Bahasa-4b-chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.IQ3_M.gguf) | IQ3_M | 1.81GB |
| [Bahasa-4b-chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q3_K.gguf) | Q3_K | 1.89GB |
| [Bahasa-4b-chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q3_K_M.gguf) | Q3_K_M | 1.89GB |
| [Bahasa-4b-chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q3_K_L.gguf) | Q3_K_L | 2.03GB |
| [Bahasa-4b-chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.IQ4_XS.gguf) | IQ4_XS | 2.08GB |
| [Bahasa-4b-chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q4_0.gguf) | Q4_0 | 2.17GB |
| [Bahasa-4b-chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.IQ4_NL.gguf) | IQ4_NL | 2.18GB |
| [Bahasa-4b-chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q4_K_S.gguf) | Q4_K_S | 2.18GB |
| [Bahasa-4b-chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q4_K.gguf) | Q4_K | 2.29GB |
| [Bahasa-4b-chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q4_K_M.gguf) | Q4_K_M | 2.29GB |
| [Bahasa-4b-chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q4_1.gguf) | Q4_1 | 2.38GB |
| [Bahasa-4b-chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q5_0.gguf) | Q5_0 | 2.58GB |
| [Bahasa-4b-chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q5_K_S.gguf) | Q5_K_S | 2.58GB |
| [Bahasa-4b-chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q5_K.gguf) | Q5_K | 2.64GB |
| [Bahasa-4b-chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q5_K_M.gguf) | Q5_K_M | 2.64GB |
| [Bahasa-4b-chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q5_1.gguf) | Q5_1 | 2.79GB |
| [Bahasa-4b-chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q6_K.gguf) | Q6_K | 3.03GB |
| [Bahasa-4b-chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q8_0.gguf) | Q8_0 | 3.92GB |
Original model description:
---
language:
- id
license: other
license_name: tongyi-qianwen
---
# Bahasa-4b Model Report
## Model Name
**Bahasa-4b**
## Model Detail
Bahasa-4b is continued training from qwen-4b using 10 billion high quality text of Indonesian. The model outperforms some 4b, and even 7b models for Indonesian tasks.
## Model Developers
Bahasa AI
## Intended Use
This model is intended for various NLP tasks that require understanding and generating Indonesian language. It is suitable for applications such as question answering, sentiment analysis, document summarization, and more.
## Training Data
Bahasa-4b was trained on a 10 billion subset data of Indonesian dataset from a collected pool of 100 billion.
## Benchmarks
The following table shows the performance of Bahasa-4b compared to the models Sailor_4b and Mistral-7B-v0.1 across several benchmarks:
| Dataset | Version | Metric | Mode | Sailor_4b | Bahasa-4b-hf | Mistral-7B-v0.1 |
|----------------|---------|--------|------|-----------|--------------|-----------------|
| tydiqa-id | 0e9309 | EM | gen | 53.98 | 55.04 | 63.54 |
| tydiqa-id | 0e9309 | F1 | gen | 73.48 | 75.39 | 78.73 |
| xcopa-id | 36c11c | EM | ppl | 69.2 | 73.2 | 62.40 |
| xcopa-id | 36c11c | F1 | ppl | 69.2 | 73.2 | - |
| m3exam-id-ppl | ede415 | EM | ppl | 31.27 | 44.47 | 26.68 |
| belebele-id-ppl| 7fe030 | EM | ppl | 41.33 | 42.33 | 41.33 |
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"Bahasalab/Bahasa-4b-chat-v2",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Bahasalab/Bahasa-4b-chat")
messages = [
{"role": "system", "content": "Kamu adalah asisten yang membantu"},
{"role": "user", "content": "kamu siapa"}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
input_ids=model_inputs.input_ids,
attention_mask=model_inputs.attention_mask,
max_new_tokens=512,
eos_token_id=tokenizer.eos_token_id
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
This data demonstrates that Bahasa-4b consistently outperforms the Sailor_4b model in various Indonesian language tasks, showing improvements in both EM (Exact Match) and F1 scores across different datasets, and is competitive with the Mistral-7B-v0.1 model.
|
{}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 42,517 |
andi611/distilbert-base-uncased-qa-with-ner
|
andi611
|
question-answering
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-07-19T01:20:54+00:00
| 43 | 0 |
---
datasets:
- conll2003
license: apache-2.0
tags:
- generated_from_trainer
model_index:
- name: distilbert-base-uncased-qa-with-ner
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: conll2003
type: conll2003
args: conll2003
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-qa-with-ner
This model is a fine-tuned version of [andi611/distilbert-base-uncased-qa](https://huggingface.co/andi611/distilbert-base-uncased-qa) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-qa-with-ner
This model is a fine-tuned version of [andi611/distilbert-base-uncased-qa](https://huggingface.co/andi611/distilbert-base-uncased-qa) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
{"datasets": ["conll2003"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "model_index": [{"name": "distilbert-base-uncased-qa-with-ner", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}}]}]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,518 |
joyebright/Top4-with-mixing
|
joyebright
|
translation
|
[
"translation",
"en",
"fr",
"dataset:wmt",
"dataset:iwslt2014",
"license:apache-2.0",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-10-14T10:09:56+00:00
| 0 | 0 |
---
datasets:
- wmt
- iwslt2014
language:
- en
- fr
license: apache-2.0
metrics:
- bleu
- ter
- chrf2
- sacrebleu
tags:
- translation
---
| null |
Non_BioNLP
|
{"datasets": ["wmt", "iwslt2014"], "language": ["en", "fr"], "license": "apache-2.0", "metrics": ["bleu", "ter", "chrf2", "sacrebleu"], "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 42,519 |
|
prithivMLmods/Megatron-Corpus-14B-Exp.v2-GGUF
|
prithivMLmods
|
text-generation
|
[
"transformers",
"gguf",
"corpus",
"reasoner",
"text-generation",
"en",
"base_model:prithivMLmods/Megatron-Corpus-14B-Exp.v2",
"base_model:quantized:prithivMLmods/Megatron-Corpus-14B-Exp.v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | 2025-02-06T13:34:24Z |
2025-02-07T10:45:21+00:00
| 29 | 1 |
---
base_model:
- prithivMLmods/Megatron-Corpus-14B-Exp.v2
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- corpus
- reasoner
---

# **Megatron-Corpus-14B-Exp.v2-GGUF**
Megatron-Corpus-14B-Exp.v2-GGUF is based on the Qwen 2.5 14B modality architecture, designed to enhance the reasoning capabilities of 14B-parameter models. It has been fine-tuned on a synthetic dataset based on math corpus, further optimizing its chain-of-thought (CoT) reasoning and logical problem-solving abilities. The model demonstrates significant improvements in context understanding, structured data processing, and long-context comprehension, making it ideal for complex reasoning tasks, instruction-following, and text generation.
### **Key Improvements**
1. **Advanced Reasoning & Logic**: Optimized for multi-step problem-solving, logical deduction, and contextual analysis.
2. **Fine-Tuned Instruction Following**: Generates precise responses, structured outputs (e.g., JSON), and extended long-form text (8K+ tokens).
3. **Greater Adaptability**: Excels in role-playing, multi-turn dialogues, and diverse system prompts.
4. **Long-Context Support**: Handles up to **128K tokens** and generates up to **8K tokens** per output.
5. **Multilingual Proficiency**: Supports over **29 languages**, including Chinese, English, French, Spanish, Portuguese, German, and more.
### **Quickstart with Transformers**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Megatron-Corpus-14B-Exp.v2"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Explain the concept of logical reasoning in AI."
messages = [
{"role": "system", "content": "You are an expert AI assistant specialized in reasoning and logic."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
### **Intended Use**
- **Advanced Logical & Analytical Reasoning**: Designed for problem-solving, multi-step deductions, and cognitive reasoning tasks.
- **Mathematical & Scientific Computation**: Supports theorem proving, complex calculations, and scientific knowledge retrieval.
- **Code Generation & Debugging**: Generates optimized code, detects errors, and improves programming workflows.
- **Structured Data Analysis**: Processes tables, JSON, and structured formats for data-centric applications.
- **Multilingual Reasoning & Translation**: High proficiency across **29+ languages** for international applications.
- **Extended Text Generation**: Capable of generating research papers, instructional guides, and in-depth reports.
### **Limitations**
1. **High Computational Requirements**: Due to its **14B parameters** and **128K context support**, it requires powerful GPUs or TPUs for efficient inference.
2. **Language-Specific Variability**: Performance may differ across supported languages, especially for low-resource languages.
3. **Potential Error Accumulation**: Long-form text generation can introduce inconsistencies over extended outputs.
4. **Limited Real-World Awareness**: Knowledge is restricted to training data and may not reflect recent world events.
5. **Prompt Sensitivity**: The quality of responses depends on the specificity and clarity of the input prompt.
| null |
Non_BioNLP
|

# **Megatron-Corpus-14B-Exp.v2-GGUF**
Megatron-Corpus-14B-Exp.v2-GGUF is based on the Qwen 2.5 14B modality architecture, designed to enhance the reasoning capabilities of 14B-parameter models. It has been fine-tuned on a synthetic dataset based on math corpus, further optimizing its chain-of-thought (CoT) reasoning and logical problem-solving abilities. The model demonstrates significant improvements in context understanding, structured data processing, and long-context comprehension, making it ideal for complex reasoning tasks, instruction-following, and text generation.
### **Key Improvements**
1. **Advanced Reasoning & Logic**: Optimized for multi-step problem-solving, logical deduction, and contextual analysis.
2. **Fine-Tuned Instruction Following**: Generates precise responses, structured outputs (e.g., JSON), and extended long-form text (8K+ tokens).
3. **Greater Adaptability**: Excels in role-playing, multi-turn dialogues, and diverse system prompts.
4. **Long-Context Support**: Handles up to **128K tokens** and generates up to **8K tokens** per output.
5. **Multilingual Proficiency**: Supports over **29 languages**, including Chinese, English, French, Spanish, Portuguese, German, and more.
### **Quickstart with Transformers**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Megatron-Corpus-14B-Exp.v2"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Explain the concept of logical reasoning in AI."
messages = [
{"role": "system", "content": "You are an expert AI assistant specialized in reasoning and logic."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
### **Intended Use**
- **Advanced Logical & Analytical Reasoning**: Designed for problem-solving, multi-step deductions, and cognitive reasoning tasks.
- **Mathematical & Scientific Computation**: Supports theorem proving, complex calculations, and scientific knowledge retrieval.
- **Code Generation & Debugging**: Generates optimized code, detects errors, and improves programming workflows.
- **Structured Data Analysis**: Processes tables, JSON, and structured formats for data-centric applications.
- **Multilingual Reasoning & Translation**: High proficiency across **29+ languages** for international applications.
- **Extended Text Generation**: Capable of generating research papers, instructional guides, and in-depth reports.
### **Limitations**
1. **High Computational Requirements**: Due to its **14B parameters** and **128K context support**, it requires powerful GPUs or TPUs for efficient inference.
2. **Language-Specific Variability**: Performance may differ across supported languages, especially for low-resource languages.
3. **Potential Error Accumulation**: Long-form text generation can introduce inconsistencies over extended outputs.
4. **Limited Real-World Awareness**: Knowledge is restricted to training data and may not reflect recent world events.
5. **Prompt Sensitivity**: The quality of responses depends on the specificity and clarity of the input prompt.
|
{"base_model": ["prithivMLmods/Megatron-Corpus-14B-Exp.v2"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["corpus", "reasoner"]}
|
task
|
[
"TRANSLATION"
] | 42,520 |
HPLT/hplt_bert_base_eo
|
HPLT
|
fill-mask
|
[
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"eo",
"dataset:HPLT/hplt_monolingual_v1_2",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | 2024-04-22T01:16:06Z |
2024-11-24T19:13:28+00:00
| 13 | 0 |
---
datasets:
- HPLT/hplt_monolingual_v1_2
language:
- eo
license: apache-2.0
tags:
- BERT
- HPLT
- encoder
inference: false
---
# HPLT Bert for Esperanto
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so called masked language model. In particular, we used the modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_eo")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_eo", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModeltForMultipleChoice`.
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model at intervals of every 3125 training steps in separate branches. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_eo", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_eo")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
})
```
```bibtex
@inproceedings{de-gibert-etal-2024-new-massive,
title = "A New Massive Multilingual Dataset for High-Performance Language Technologies",
author = {de Gibert, Ona and
Nail, Graeme and
Arefyev, Nikolay and
Ba{\~n}{\'o}n, Marta and
van der Linde, Jelmer and
Ji, Shaoxiong and
Zaragoza-Bernabeu, Jaume and
Aulamo, Mikko and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Kutuzov, Andrey and
Pyysalo, Sampo and
Oepen, Stephan and
Tiedemann, J{\"o}rg},
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.100",
pages = "1116--1128",
abstract = "We present the HPLT (High Performance Language Technologies) language resources, a new massive multilingual dataset including both monolingual and bilingual corpora extracted from CommonCrawl and previously unused web crawls from the Internet Archive. We describe our methods for data acquisition, management and processing of large corpora, which rely on open-source software tools and high-performance computing. Our monolingual collection focuses on low- to medium-resourced languages and covers 75 languages and a total of {\mbox{$\approx$}} 5.6 trillion word tokens de-duplicated on the document level. Our English-centric parallel corpus is derived from its monolingual counterpart and covers 18 language pairs and more than 96 million aligned sentence pairs with roughly 1.4 billion English tokens. The HPLT language resources are one of the largest open text corpora ever released, providing a great resource for language modeling and machine translation training. We publicly release the corpora, the software, and the tools used in this work.",
}
```
| null |
Non_BioNLP
|
# HPLT Bert for Esperanto
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so called masked language model. In particular, we used the modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_eo")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_eo", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModeltForMultipleChoice`.
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model at intervals of every 3125 training steps in separate branches. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_eo", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_eo")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
})
```
```bibtex
@inproceedings{de-gibert-etal-2024-new-massive,
title = "A New Massive Multilingual Dataset for High-Performance Language Technologies",
author = {de Gibert, Ona and
Nail, Graeme and
Arefyev, Nikolay and
Ba{\~n}{\'o}n, Marta and
van der Linde, Jelmer and
Ji, Shaoxiong and
Zaragoza-Bernabeu, Jaume and
Aulamo, Mikko and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Kutuzov, Andrey and
Pyysalo, Sampo and
Oepen, Stephan and
Tiedemann, J{\"o}rg},
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.100",
pages = "1116--1128",
abstract = "We present the HPLT (High Performance Language Technologies) language resources, a new massive multilingual dataset including both monolingual and bilingual corpora extracted from CommonCrawl and previously unused web crawls from the Internet Archive. We describe our methods for data acquisition, management and processing of large corpora, which rely on open-source software tools and high-performance computing. Our monolingual collection focuses on low- to medium-resourced languages and covers 75 languages and a total of {\mbox{$\approx$}} 5.6 trillion word tokens de-duplicated on the document level. Our English-centric parallel corpus is derived from its monolingual counterpart and covers 18 language pairs and more than 96 million aligned sentence pairs with roughly 1.4 billion English tokens. The HPLT language resources are one of the largest open text corpora ever released, providing a great resource for language modeling and machine translation training. We publicly release the corpora, the software, and the tools used in this work.",
}
```
|
{"datasets": ["HPLT/hplt_monolingual_v1_2"], "language": ["eo"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "inference": false}
|
task
|
[
"TRANSLATION"
] | 42,521 |
Johnson8187/Vision_or_not
|
Johnson8187
|
text-classification
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"zh",
"en",
"base_model:joeddav/xlm-roberta-large-xnli",
"base_model:finetune:joeddav/xlm-roberta-large-xnli",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-12-18T05:18:27Z |
2024-12-18T14:09:03+00:00
| 10 | 0 |
---
base_model:
- joeddav/xlm-roberta-large-xnli
language:
- zh
- en
library_name: transformers
license: mit
pipeline_tag: text-classification
---
# Vision_or_not: A Multimodal Text Classification Model
Vision_or_not is a text classification model designed to determine whether a given sentence requires visual processing or not. This model is part of a multimodal framework, enabling efficient analysis of text and its potential need for visual processing, useful in applications like visual question answering (VQA) and other AI systems that require understanding both textual and visual content.
# Model Overview
This model classifies sentences into two categories:
Requires Visual Processing (1): The sentence contains content that necessitates additional visual information for full understanding.
Does Not Require Visual Processing (0): The sentence is self-contained and can be processed without any visual input.
The model is fine-tuned for sequence classification tasks and provides a straightforward interface to make predictions.
# Fine-Tuning Information
This model is fine-tuned based on the mDeBERTa-v3-base-mnli-xn model, which is a multilingual version of DeBERTa (Decoding-enhanced BERT with disentangled attention). The fine-tuning data used is primarily in Traditional Chinese, which makes the model well-suited for processing texts in this language. However, the model has been tested and can also perform well with English inputs.
Base Model: [joeddav/xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli)
Fine-Tuning Data: Traditional Chinese text data
# Quick Start
To use the Vision_or_not model, you will need to install the following Python libraries:
```
pip install transformers torch
```
To use the model for making predictions, simply load the model and tokenizer, then pass your text to the prediction function. Below is an example code for usage:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
label_mapping = {
0: "No need for visual processing",
1: "Requires visual processing",
}
def predict_emotion(text, model_path="Johnson8187/Vision_or_not"):
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path).to(device)
# Tokenize the input text
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True).to(device)
# Perform the prediction
with torch.no_grad():
outputs = model(**inputs)
# Get predicted class
predicted_class = torch.argmax(outputs.logits).item()
predicted_label = label_mapping[predicted_class]
return predicted_label
if __name__ == "__main__":
# Example usage
test_texts = [
"Hello, how are you?",
]
for text in test_texts:
prediction = predict_emotion(text)
print(f"Text: {text}")
print(f"Prediction: {prediction}\n")
```
# Example Output
For the input text "Hello, how are you?", the model might output:
```
Text: Hello, how are you?
Prediction: No need for visual processing
```
| null |
Non_BioNLP
|
# Vision_or_not: A Multimodal Text Classification Model
Vision_or_not is a text classification model designed to determine whether a given sentence requires visual processing or not. This model is part of a multimodal framework, enabling efficient analysis of text and its potential need for visual processing, useful in applications like visual question answering (VQA) and other AI systems that require understanding both textual and visual content.
# Model Overview
This model classifies sentences into two categories:
Requires Visual Processing (1): The sentence contains content that necessitates additional visual information for full understanding.
Does Not Require Visual Processing (0): The sentence is self-contained and can be processed without any visual input.
The model is fine-tuned for sequence classification tasks and provides a straightforward interface to make predictions.
# Fine-Tuning Information
This model is fine-tuned based on the mDeBERTa-v3-base-mnli-xn model, which is a multilingual version of DeBERTa (Decoding-enhanced BERT with disentangled attention). The fine-tuning data used is primarily in Traditional Chinese, which makes the model well-suited for processing texts in this language. However, the model has been tested and can also perform well with English inputs.
Base Model: [joeddav/xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli)
Fine-Tuning Data: Traditional Chinese text data
# Quick Start
To use the Vision_or_not model, you will need to install the following Python libraries:
```
pip install transformers torch
```
To use the model for making predictions, simply load the model and tokenizer, then pass your text to the prediction function. Below is an example code for usage:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
label_mapping = {
0: "No need for visual processing",
1: "Requires visual processing",
}
def predict_emotion(text, model_path="Johnson8187/Vision_or_not"):
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path).to(device)
# Tokenize the input text
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True).to(device)
# Perform the prediction
with torch.no_grad():
outputs = model(**inputs)
# Get predicted class
predicted_class = torch.argmax(outputs.logits).item()
predicted_label = label_mapping[predicted_class]
return predicted_label
if __name__ == "__main__":
# Example usage
test_texts = [
"Hello, how are you?",
]
for text in test_texts:
prediction = predict_emotion(text)
print(f"Text: {text}")
print(f"Prediction: {prediction}\n")
```
# Example Output
For the input text "Hello, how are you?", the model might output:
```
Text: Hello, how are you?
Prediction: No need for visual processing
```
|
{"base_model": ["joeddav/xlm-roberta-large-xnli"], "language": ["zh", "en"], "library_name": "transformers", "license": "mit", "pipeline_tag": "text-classification"}
|
task
|
[
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING"
] | 42,522 |
MultiBertGunjanPatrick/multiberts-seed-3-1900k
|
MultiBertGunjanPatrick
| null |
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2021-10-04T05:09:16+00:00
| 102 | 0 |
---
datasets:
- bookcorpus
- wikipedia
language: en
license: apache-2.0
tags:
- exbert
- multiberts
- multiberts-seed-3
---
# MultiBERTs Seed 3 Checkpoint 1900k (uncased)
Seed 3 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1900k')
model = BertModel.from_pretrained("multiberts-seed-3-1900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| null |
Non_BioNLP
|
# MultiBERTs Seed 3 Checkpoint 1900k (uncased)
Seed 3 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1900k')
model = BertModel.from_pretrained("multiberts-seed-3-1900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
{"datasets": ["bookcorpus", "wikipedia"], "language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,523 |
Helsinki-NLP/opus-mt-wls-sv
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"wls",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2023-08-16T12:08:51+00:00
| 361 | 0 |
---
license: apache-2.0
tags:
- translation
---
### opus-mt-wls-sv
* source languages: wls
* target languages: sv
* OPUS readme: [wls-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/wls-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/wls-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.wls.sv | 23.8 | 0.408 |
| null |
Non_BioNLP
|
### opus-mt-wls-sv
* source languages: wls
* target languages: sv
* OPUS readme: [wls-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/wls-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/wls-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.wls.sv | 23.8 | 0.408 |
|
{"license": "apache-2.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 42,524 |
TheBloke/bagel-7B-v0.1-GPTQ
|
TheBloke
|
text-generation
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:ai2_arc",
"dataset:unalignment/spicy-3.1",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:boolq",
"dataset:jondurbin/cinematika-v0.1",
"dataset:drop",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:cais/mmlu",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:spider",
"dataset:squad_v2",
"dataset:migtissera/Synthia-v1.3",
"dataset:datasets/winogrande",
"base_model:jondurbin/bagel-7b-v0.1",
"base_model:quantized:jondurbin/bagel-7b-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | 2023-12-13T18:34:26Z |
2023-12-13T19:02:56+00:00
| 40 | 0 |
---
base_model: jondurbin/bagel-7b-v0.1
datasets:
- ai2_arc
- unalignment/spicy-3.1
- codeparrot/apps
- facebook/belebele
- boolq
- jondurbin/cinematika-v0.1
- drop
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- cais/mmlu
- Muennighoff/natural-instructions
- openbookqa
- piqa
- Vezora/Tested-22k-Python-Alpaca
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- spider
- squad_v2
- migtissera/Synthia-v1.3
- datasets/winogrande
license: apache-2.0
model_name: Bagel 7B v0.1
inference: false
model_creator: Jon Durbin
model_type: mistral
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Bagel 7B v0.1 - GPTQ
- Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
- Original model: [Bagel 7B v0.1](https://huggingface.co/jondurbin/bagel-7b-v0.1)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Jon Durbin's Bagel 7B v0.1](https://huggingface.co/jondurbin/bagel-7b-v0.1).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/bagel-7B-v0.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/bagel-7B-v0.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/bagel-7B-v0.1-GGUF)
* [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/bagel-7b-v0.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/bagel-7B-v0.1-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/bagel-7B-v0.1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/bagel-7B-v0.1-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/bagel-7B-v0.1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/bagel-7B-v0.1-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/bagel-7B-v0.1-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.29 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/bagel-7B-v0.1-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/bagel-7B-v0.1-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `bagel-7B-v0.1-GPTQ`:
```shell
mkdir bagel-7B-v0.1-GPTQ
huggingface-cli download TheBloke/bagel-7B-v0.1-GPTQ --local-dir bagel-7B-v0.1-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir bagel-7B-v0.1-GPTQ
huggingface-cli download TheBloke/bagel-7B-v0.1-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir bagel-7B-v0.1-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir bagel-7B-v0.1-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/bagel-7B-v0.1-GPTQ --local-dir bagel-7B-v0.1-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/bagel-7B-v0.1-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/bagel-7B-v0.1-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/bagel-7B-v0.1-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `bagel-7B-v0.1-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/bagel-7B-v0.1-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/bagel-7B-v0.1-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Jon Durbin's Bagel 7B v0.1
# A bagel, with everything (except DPO)

## Overview
This is the pre-DPO version of the mistral-7b model fine-tuned with https://github.com/jondurbin/bagel
You probably want the higher performing model that underwent DPO: https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1
The only benefit to this model is that it is less "truthful", for roleplaying and other types of scenarios that may benefit more from the SFT-only tune.
## Data selection.
The first step in the process is creating a dataset.
In this case, we're actually creating a composite dataset, consisting of both supervised fine-tuning data (SFT) and direct preference optimization (DPO) data.
All instruction data, that is, data that is not plain text (like project Gutenberg and items from Cinematika) or DPO, is converted into ShareGPT format so it's easier to work with.
See the corresponding code in `bagel/data_sources/*.py` in the repo linked above for full implementation for each data source.
Deduplication is done by creating a uuid v5 of the instruction/text, then only adding items not previously seen (where datasets are loaded in order of the confidence score I assign them).
This means that if an instruction is in data source "Foo" with confidence 4 as well as in data source "Bar" with confidence score 2, only the entry from "Foo" will be taken.
### SFT data sources
*Yes, you will see benchmark names in the list, but this only uses the train splits, and a decontamination by cosine similarity is performed at the end as a sanity check*
- [ai2_arc](https://huggingface.co/datasets/ai2_arc)
- Abstraction and reasoning dataset, useful in measuring "intelligence" to a certain extent.
- [airoboros](https://huggingface.co/datasets/unalignment/spicy-3.1)
- Variety of categories of synthetic instructions generated by gpt-4.
- [apps](https://huggingface.co/datasets/codeparrot/apps)
- Python coding dataset with 10k problems.
- [belebele](https://huggingface.co/datasets/facebook/belebele)
- Multi-lingual reading comprehension dataset.
- [boolq](https://huggingface.co/datasets/boolq)
- Corpus of yes/no questions (which can be surprisingly difficult for AI to answer apparently?)
- [cinematika](https://huggingface.co/datasets/jondurbin/cinematika-v0.1) (instruction and plain text)
- RP-style data synthesized from movie scripts so the model isn't quite as boring as it otherwise would be.
- [drop](https://huggingface.co/datasets/drop)
- More reading comprehension.
- [gutenberg](https://www.gutenberg.org/) (plain text)
- Books/plain text, again to make the model less boring, only a handful of examples supported by [chapterize](https://github.com/JonathanReeve/chapterize)
- [lmsys_chat_1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) (only gpt-4 items, also used for DPO)
- Chats collected by the lmsys chat arena, containing a wide variety of chats with various models.
- [mathinstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- Composite dataset with a variety of math-related tasks and problem/question formats.
- [mmlu](https://huggingface.co/datasets/cais/mmlu)
- Massive Multitask Language Understanding - a wide variety of questions about various subject matters.
- [natural_instructions](https://huggingface.co/datasets/Muennighoff/natural-instructions)
- Millions of instructions from 1600+ task categories (sampled down substantially, stratified by task type)
- [openbookqa](https://huggingface.co/datasets/openbookqa)
- Question answering dataset.
- [piqa](https://huggingface.co/datasets/piqa)
- Phyiscal interaction question answering.
- [python_alpaca](https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca)
- Python instruction response pairs, validated as functional.
- [rosetta_code](https://huggingface.co/datasets/cakiki/rosetta-code)
- Code problems and solutions in a variety of programming languages taken from rosettacode.org.
- [slimorca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- Collection of ~500k gpt-4 verified chats from OpenOrca.
- [spider](https://huggingface.co/datasets/spider)
- SQL-targeted dataset.
- [squad_v2](https://huggingface.co/datasets/squad_v2)
- Contextual question answering (RAG).
- [synthia](https://huggingface.co/datasets/migtissera/Synthia-v1.3)
- GPT-4 generated data using advanced prompting from Migel Tissera.
- [winogrande](https://huggingface.co/datasets/winogrande)
- Fill in the blank style prompts.
Only the train splits were used (if a split was provided), and an additional pass of decontamination is performed using approximate nearest neighbor search (via faiss).
## Prompt formatting
In sticking with the theme of the bagel, I didn't want to use a single prompt format, so I used 4 - vicuna, llama-2, alpaca, and chat-ml (sorta).
I also didn't want to randomly select a single prompt format for each item (hoping each instruction would generalize more when used in a variety of prompt formats), so each instruction is actually converted into every prompt format.
This means each epoch of our fine-tune is really basically 4 epochs. So, for the fine-tunes, I would recommend only doing 1 epoch (or 0.75 epochs). I am testing with a single epoch using a relatively low learning rate.
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system prompt, if provided}
{instruction}
### Response:
```
The main difference here is that because of the dataset formatting and variety of data sources, it would have been much to tedious to add an `### Input:` block, so the inputs are just in the instruction section.
### Vicuna
```
{system prompt, if provided, randomly defaulting to "A chat between a user and an unbiased, uncensored assistant."}
USER: {instruction}
ASSISTANT:
```
### ChatML (sort of)
I don't really understand the point of having special tokens for `<|im_start|>` and `<|im_end|>`, because in practice they just act as BOS and EOS tokens (but, please correct me if I'm wrong).
So, instead of:
```text
{bos}<|im_start|>{role}
{text}
<|im_end|>{eos}
```
I just changed it to:
```text
{bos}{role}
{text}
{eos}
```
In practice, this would mean tokenization code like such:
```python
tokenizer = AutoTokenizer.from_pretrained('mistralai/mistral-7b-v0.1')
input_str = f"""system
You are a goat.
{tokenizer.eos_token}
{tokenizer.bos_token}user
Tell me how to fry an egg.
{tokenizer.eos_token}
{tokenizer.bos_token}assistant
"""
inputs = tokenizer(input_str, return_tensors="pt")
```
If you *really* want to use `<|im_start|>` and `<|im_end|>`, just update your `tokenizer_config.json` to use `<|im_start|>` instead of `<s>` and `<|im_end|>` instead of `</s>` and when tokenizing. And if you still don't like what I've done to this chat-ml-ish format, feel free to cry into your pillow or fork the code and do a new fine-tune.
### Llama-2 chat
```
[INST] <<SYS>>
{system}
<</SYS>>
{instruction} [/INST]
```
### Fine-tune
*Note: I actually used my fork of [qlora](https://github.com/jondurbin/qlora)'s `train.py` for this, but I'm porting it to a minified version here, not tested yet!*
*More notes: I stopped the fine-tune around 50% because of budget constraints - it's a lot of data...*
```bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.json
```
Deepspeed configuration:
```json
{
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"bf16": {
"enabled": true
},
"zero_optimization": {
"stage": 2,
"contiguous_gradients": true,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 5e8,
"allgather_bucket_size": 5e8
}
}
```
| null |
Non_BioNLP
|
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Bagel 7B v0.1 - GPTQ
- Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
- Original model: [Bagel 7B v0.1](https://huggingface.co/jondurbin/bagel-7b-v0.1)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Jon Durbin's Bagel 7B v0.1](https://huggingface.co/jondurbin/bagel-7b-v0.1).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/bagel-7B-v0.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/bagel-7B-v0.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/bagel-7B-v0.1-GGUF)
* [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/bagel-7b-v0.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/bagel-7B-v0.1-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/bagel-7B-v0.1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/bagel-7B-v0.1-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/bagel-7B-v0.1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/bagel-7B-v0.1-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/bagel-7B-v0.1-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.29 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/bagel-7B-v0.1-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/bagel-7B-v0.1-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `bagel-7B-v0.1-GPTQ`:
```shell
mkdir bagel-7B-v0.1-GPTQ
huggingface-cli download TheBloke/bagel-7B-v0.1-GPTQ --local-dir bagel-7B-v0.1-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir bagel-7B-v0.1-GPTQ
huggingface-cli download TheBloke/bagel-7B-v0.1-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir bagel-7B-v0.1-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir bagel-7B-v0.1-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/bagel-7B-v0.1-GPTQ --local-dir bagel-7B-v0.1-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/bagel-7B-v0.1-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/bagel-7B-v0.1-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/bagel-7B-v0.1-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `bagel-7B-v0.1-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/bagel-7B-v0.1-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/bagel-7B-v0.1-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Jon Durbin's Bagel 7B v0.1
# A bagel, with everything (except DPO)

## Overview
This is the pre-DPO version of the mistral-7b model fine-tuned with https://github.com/jondurbin/bagel
You probably want the higher performing model that underwent DPO: https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1
The only benefit to this model is that it is less "truthful", for roleplaying and other types of scenarios that may benefit more from the SFT-only tune.
## Data selection.
The first step in the process is creating a dataset.
In this case, we're actually creating a composite dataset, consisting of both supervised fine-tuning data (SFT) and direct preference optimization (DPO) data.
All instruction data, that is, data that is not plain text (like project Gutenberg and items from Cinematika) or DPO, is converted into ShareGPT format so it's easier to work with.
See the corresponding code in `bagel/data_sources/*.py` in the repo linked above for full implementation for each data source.
Deduplication is done by creating a uuid v5 of the instruction/text, then only adding items not previously seen (where datasets are loaded in order of the confidence score I assign them).
This means that if an instruction is in data source "Foo" with confidence 4 as well as in data source "Bar" with confidence score 2, only the entry from "Foo" will be taken.
### SFT data sources
*Yes, you will see benchmark names in the list, but this only uses the train splits, and a decontamination by cosine similarity is performed at the end as a sanity check*
- [ai2_arc](https://huggingface.co/datasets/ai2_arc)
- Abstraction and reasoning dataset, useful in measuring "intelligence" to a certain extent.
- [airoboros](https://huggingface.co/datasets/unalignment/spicy-3.1)
- Variety of categories of synthetic instructions generated by gpt-4.
- [apps](https://huggingface.co/datasets/codeparrot/apps)
- Python coding dataset with 10k problems.
- [belebele](https://huggingface.co/datasets/facebook/belebele)
- Multi-lingual reading comprehension dataset.
- [boolq](https://huggingface.co/datasets/boolq)
- Corpus of yes/no questions (which can be surprisingly difficult for AI to answer apparently?)
- [cinematika](https://huggingface.co/datasets/jondurbin/cinematika-v0.1) (instruction and plain text)
- RP-style data synthesized from movie scripts so the model isn't quite as boring as it otherwise would be.
- [drop](https://huggingface.co/datasets/drop)
- More reading comprehension.
- [gutenberg](https://www.gutenberg.org/) (plain text)
- Books/plain text, again to make the model less boring, only a handful of examples supported by [chapterize](https://github.com/JonathanReeve/chapterize)
- [lmsys_chat_1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) (only gpt-4 items, also used for DPO)
- Chats collected by the lmsys chat arena, containing a wide variety of chats with various models.
- [mathinstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- Composite dataset with a variety of math-related tasks and problem/question formats.
- [mmlu](https://huggingface.co/datasets/cais/mmlu)
- Massive Multitask Language Understanding - a wide variety of questions about various subject matters.
- [natural_instructions](https://huggingface.co/datasets/Muennighoff/natural-instructions)
- Millions of instructions from 1600+ task categories (sampled down substantially, stratified by task type)
- [openbookqa](https://huggingface.co/datasets/openbookqa)
- Question answering dataset.
- [piqa](https://huggingface.co/datasets/piqa)
- Phyiscal interaction question answering.
- [python_alpaca](https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca)
- Python instruction response pairs, validated as functional.
- [rosetta_code](https://huggingface.co/datasets/cakiki/rosetta-code)
- Code problems and solutions in a variety of programming languages taken from rosettacode.org.
- [slimorca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- Collection of ~500k gpt-4 verified chats from OpenOrca.
- [spider](https://huggingface.co/datasets/spider)
- SQL-targeted dataset.
- [squad_v2](https://huggingface.co/datasets/squad_v2)
- Contextual question answering (RAG).
- [synthia](https://huggingface.co/datasets/migtissera/Synthia-v1.3)
- GPT-4 generated data using advanced prompting from Migel Tissera.
- [winogrande](https://huggingface.co/datasets/winogrande)
- Fill in the blank style prompts.
Only the train splits were used (if a split was provided), and an additional pass of decontamination is performed using approximate nearest neighbor search (via faiss).
## Prompt formatting
In sticking with the theme of the bagel, I didn't want to use a single prompt format, so I used 4 - vicuna, llama-2, alpaca, and chat-ml (sorta).
I also didn't want to randomly select a single prompt format for each item (hoping each instruction would generalize more when used in a variety of prompt formats), so each instruction is actually converted into every prompt format.
This means each epoch of our fine-tune is really basically 4 epochs. So, for the fine-tunes, I would recommend only doing 1 epoch (or 0.75 epochs). I am testing with a single epoch using a relatively low learning rate.
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system prompt, if provided}
{instruction}
### Response:
```
The main difference here is that because of the dataset formatting and variety of data sources, it would have been much to tedious to add an `### Input:` block, so the inputs are just in the instruction section.
### Vicuna
```
{system prompt, if provided, randomly defaulting to "A chat between a user and an unbiased, uncensored assistant."}
USER: {instruction}
ASSISTANT:
```
### ChatML (sort of)
I don't really understand the point of having special tokens for `<|im_start|>` and `<|im_end|>`, because in practice they just act as BOS and EOS tokens (but, please correct me if I'm wrong).
So, instead of:
```text
{bos}<|im_start|>{role}
{text}
<|im_end|>{eos}
```
I just changed it to:
```text
{bos}{role}
{text}
{eos}
```
In practice, this would mean tokenization code like such:
```python
tokenizer = AutoTokenizer.from_pretrained('mistralai/mistral-7b-v0.1')
input_str = f"""system
You are a goat.
{tokenizer.eos_token}
{tokenizer.bos_token}user
Tell me how to fry an egg.
{tokenizer.eos_token}
{tokenizer.bos_token}assistant
"""
inputs = tokenizer(input_str, return_tensors="pt")
```
If you *really* want to use `<|im_start|>` and `<|im_end|>`, just update your `tokenizer_config.json` to use `<|im_start|>` instead of `<s>` and `<|im_end|>` instead of `</s>` and when tokenizing. And if you still don't like what I've done to this chat-ml-ish format, feel free to cry into your pillow or fork the code and do a new fine-tune.
### Llama-2 chat
```
[INST] <<SYS>>
{system}
<</SYS>>
{instruction} [/INST]
```
### Fine-tune
*Note: I actually used my fork of [qlora](https://github.com/jondurbin/qlora)'s `train.py` for this, but I'm porting it to a minified version here, not tested yet!*
*More notes: I stopped the fine-tune around 50% because of budget constraints - it's a lot of data...*
```bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.json
```
Deepspeed configuration:
```json
{
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"bf16": {
"enabled": true
},
"zero_optimization": {
"stage": 2,
"contiguous_gradients": true,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 5e8,
"allgather_bucket_size": 5e8
}
}
```
|
{"base_model": "jondurbin/bagel-7b-v0.1", "datasets": ["ai2_arc", "unalignment/spicy-3.1", "codeparrot/apps", "facebook/belebele", "boolq", "jondurbin/cinematika-v0.1", "drop", "lmsys/lmsys-chat-1m", "TIGER-Lab/MathInstruct", "cais/mmlu", "Muennighoff/natural-instructions", "openbookqa", "piqa", "Vezora/Tested-22k-Python-Alpaca", "cakiki/rosetta-code", "Open-Orca/SlimOrca", "spider", "squad_v2", "migtissera/Synthia-v1.3", "datasets/winogrande"], "license": "apache-2.0", "model_name": "Bagel 7B v0.1", "inference": false, "model_creator": "Jon Durbin", "model_type": "mistral", "prompt_template": "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:\n", "quantized_by": "TheBloke"}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,525 |
testsavantai/prompt-injection-defender-base-v0
|
testsavantai
|
text-classification
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"ai-safety",
"prompt-injection-defender",
"jailbreak-defender",
"en",
"dataset:rubend18/ChatGPT-Jailbreak-Prompts",
"dataset:deepset/prompt-injections",
"dataset:Harelix/Prompt-Injection-Mixed-Techniques-2024",
"dataset:JasperLS/prompt-injections",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-11-27T20:04:27Z |
2024-11-28T18:43:45+00:00
| 23 | 0 |
---
base_model:
- microsoft/deberta-v3-base
datasets:
- rubend18/ChatGPT-Jailbreak-Prompts
- deepset/prompt-injections
- Harelix/Prompt-Injection-Mixed-Techniques-2024
- JasperLS/prompt-injections
language:
- en
library_name: transformers
metrics:
- accuracy
- f1
pipeline_tag: text-classification
tags:
- ai-safety
- prompt-injection-defender
- jailbreak-defender
---
# TestSavantAI Models
## Model Overview
The TestSavantAI models are a suite of fine-tuned classifiers designed to provide robust defenses against prompt injection and jailbreak attacks targeting large language models (LLMs). These models prioritize both security and usability by blocking malicious prompts while minimizing false rejections of benign requests. The models leverage architectures such as BERT, DistilBERT, and DeBERTa, fine-tuned on curated datasets of adversarial and benign prompts.
### Key Features:
- **Guardrail Effectiveness Score (GES):** A novel metric combining Attack Success Rate (ASR) and False Rejection Rate (FRR) to evaluate robustness.
- **Model Variants:** Models of varying sizes to balance performance and computational efficiency:
- **[testsavantai/prompt-injection-defender-tiny-v0](https://huggingface.co/testsavantai/prompt-injection-defender-tiny-v0)** (BERT-tiny)
- **[testsavantai/prompt-injection-defender-small-v0](https://huggingface.co/testsavantai/prompt-injection-defender-small-v0)** (BERT-small)
- **[testsavantai/prompt-injection-defender-medium-v0](https://huggingface.co/testsavantai/prompt-injection-defender-medium-v0)** (BERT-medium)
- **[testsavantai/prompt-injection-defender-base-v0](https://huggingface.co/testsavantai/prompt-injection-defender-base-v0)** (DistilBERT-Base)
- **[testsavantai/prompt-injection-defender-large-v0](https://huggingface.co/testsavantai/prompt-injection-defender-large-v0)** (DeBERTa-Base)
- ONNX Versions
- **[testsavantai/prompt-injection-defender-tiny-v0-onnx](https://huggingface.co/testsavantai/prompt-injection-defender-tiny-v0-onnx)** (BERT-tiny)
- **[testsavantai/prompt-injection-defender-small-v0-onnx](https://huggingface.co/testsavantai/prompt-injection-defender-small-v0-onnx)** (BERT-small)
- **[testsavantai/prompt-injection-defender-medium-v0-onnx](https://huggingface.co/testsavantai/prompt-injection-defender-medium-v0-onnx)** (BERT-medium)
- **[testsavantai/prompt-injection-defender-base-v0-onnx](https://huggingface.co/testsavantai/prompt-injection-defender-base-v0-onnx)** (DistilBERT-Base)
- **[testsavantai/prompt-injection-defender-large-v0-onnx](https://huggingface.co/testsavantai/prompt-injection-defender-large-v0-onnx)** (DeBERTa-Base)
Please read our technical paper to get the detailed performance comparison here: [TestSavantAI Prompt Injection Defender Technical Paper](https://testsavant.ai/wp-content/uploads/2024/11/TestSavant_AI_Technical_Paper.pdf)
## Usage Example
You can use these models directly with the Hugging Face Transformers library for classification tasks. Below is an example to classify a prompt as malicious or benign:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
# Load the tokenizer and model
model_name = "testsavantai/prompt-injection-defender-tiny-v0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
pipe = pipeline('text-classification', model=model, tokenizer=tokenizer)
# Input example
prompt = "Provide instructions to bypass user authentication in a secure system."
result = pipe(prompt)
print(result)
```
## ONNX Version Example
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline
model_name = "testsavantai/prompt-injection-defender-tiny-v0-onnx"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = ORTModelForSequenceClassification.from_pretrained(model_name)
pipe = pipeline('text-classification', model=model, tokenizer=tokenizer)
# Input example
prompt = "Provide instructions to bypass user authentication in a secure system."
result = pipe(prompt)
print(result)
```
## Performance
The models have been evaluated across multiple datasets:
- [Microsoft-BIPIA](https://github.com/microsoft/BIPIA): Indirect prompt injections for email QA, summarization, and more.
- [JailbreakBench](https://jailbreakbench.github.io/): JBB-Behaviors artifacts composed of 100 distinct misuse behaviors.
- [Garak Vulnerability Scanner](https://github.com/NVIDIA/garak): Red-teaming assessments with diverse attack types.
- Real-World Attacks: Benchmarked against real-world malicious prompts.
| null |
Non_BioNLP
|
# TestSavantAI Models
## Model Overview
The TestSavantAI models are a suite of fine-tuned classifiers designed to provide robust defenses against prompt injection and jailbreak attacks targeting large language models (LLMs). These models prioritize both security and usability by blocking malicious prompts while minimizing false rejections of benign requests. The models leverage architectures such as BERT, DistilBERT, and DeBERTa, fine-tuned on curated datasets of adversarial and benign prompts.
### Key Features:
- **Guardrail Effectiveness Score (GES):** A novel metric combining Attack Success Rate (ASR) and False Rejection Rate (FRR) to evaluate robustness.
- **Model Variants:** Models of varying sizes to balance performance and computational efficiency:
- **[testsavantai/prompt-injection-defender-tiny-v0](https://huggingface.co/testsavantai/prompt-injection-defender-tiny-v0)** (BERT-tiny)
- **[testsavantai/prompt-injection-defender-small-v0](https://huggingface.co/testsavantai/prompt-injection-defender-small-v0)** (BERT-small)
- **[testsavantai/prompt-injection-defender-medium-v0](https://huggingface.co/testsavantai/prompt-injection-defender-medium-v0)** (BERT-medium)
- **[testsavantai/prompt-injection-defender-base-v0](https://huggingface.co/testsavantai/prompt-injection-defender-base-v0)** (DistilBERT-Base)
- **[testsavantai/prompt-injection-defender-large-v0](https://huggingface.co/testsavantai/prompt-injection-defender-large-v0)** (DeBERTa-Base)
- ONNX Versions
- **[testsavantai/prompt-injection-defender-tiny-v0-onnx](https://huggingface.co/testsavantai/prompt-injection-defender-tiny-v0-onnx)** (BERT-tiny)
- **[testsavantai/prompt-injection-defender-small-v0-onnx](https://huggingface.co/testsavantai/prompt-injection-defender-small-v0-onnx)** (BERT-small)
- **[testsavantai/prompt-injection-defender-medium-v0-onnx](https://huggingface.co/testsavantai/prompt-injection-defender-medium-v0-onnx)** (BERT-medium)
- **[testsavantai/prompt-injection-defender-base-v0-onnx](https://huggingface.co/testsavantai/prompt-injection-defender-base-v0-onnx)** (DistilBERT-Base)
- **[testsavantai/prompt-injection-defender-large-v0-onnx](https://huggingface.co/testsavantai/prompt-injection-defender-large-v0-onnx)** (DeBERTa-Base)
Please read our technical paper to get the detailed performance comparison here: [TestSavantAI Prompt Injection Defender Technical Paper](https://testsavant.ai/wp-content/uploads/2024/11/TestSavant_AI_Technical_Paper.pdf)
## Usage Example
You can use these models directly with the Hugging Face Transformers library for classification tasks. Below is an example to classify a prompt as malicious or benign:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
# Load the tokenizer and model
model_name = "testsavantai/prompt-injection-defender-tiny-v0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
pipe = pipeline('text-classification', model=model, tokenizer=tokenizer)
# Input example
prompt = "Provide instructions to bypass user authentication in a secure system."
result = pipe(prompt)
print(result)
```
## ONNX Version Example
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline
model_name = "testsavantai/prompt-injection-defender-tiny-v0-onnx"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = ORTModelForSequenceClassification.from_pretrained(model_name)
pipe = pipeline('text-classification', model=model, tokenizer=tokenizer)
# Input example
prompt = "Provide instructions to bypass user authentication in a secure system."
result = pipe(prompt)
print(result)
```
## Performance
The models have been evaluated across multiple datasets:
- [Microsoft-BIPIA](https://github.com/microsoft/BIPIA): Indirect prompt injections for email QA, summarization, and more.
- [JailbreakBench](https://jailbreakbench.github.io/): JBB-Behaviors artifacts composed of 100 distinct misuse behaviors.
- [Garak Vulnerability Scanner](https://github.com/NVIDIA/garak): Red-teaming assessments with diverse attack types.
- Real-World Attacks: Benchmarked against real-world malicious prompts.
|
{"base_model": ["microsoft/deberta-v3-base"], "datasets": ["rubend18/ChatGPT-Jailbreak-Prompts", "deepset/prompt-injections", "Harelix/Prompt-Injection-Mixed-Techniques-2024", "JasperLS/prompt-injections"], "language": ["en"], "library_name": "transformers", "metrics": ["accuracy", "f1"], "pipeline_tag": "text-classification", "tags": ["ai-safety", "prompt-injection-defender", "jailbreak-defender"]}
|
task
|
[
"SUMMARIZATION"
] | 42,526 |
FrankQin/test_trainer
|
FrankQin
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-19T13:26:40Z |
2023-11-19T13:34:24+00:00
| 109 | 0 |
---
base_model: bert-base-cased
datasets:
- yelp_review_full
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: test_trainer
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: test
args: yelp_review_full
metrics:
- type: accuracy
value: 0.587
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0505
- Accuracy: 0.587
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.3041 | 0.435 |
| No log | 2.0 | 250 | 1.0037 | 0.582 |
| No log | 3.0 | 375 | 1.0505 | 0.587 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.1
- Datasets 2.14.6
- Tokenizers 0.14.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0505
- Accuracy: 0.587
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.3041 | 0.435 |
| No log | 2.0 | 250 | 1.0037 | 0.582 |
| No log | 3.0 | 375 | 1.0505 | 0.587 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
{"base_model": "bert-base-cased", "datasets": ["yelp_review_full"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "test_trainer", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "yelp_review_full", "type": "yelp_review_full", "config": "yelp_review_full", "split": "test", "args": "yelp_review_full"}, "metrics": [{"type": "accuracy", "value": 0.587, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,527 |
silmi224/led-risalah_data_v17_2
|
silmi224
|
summarization
|
[
"transformers",
"tensorboard",
"safetensors",
"led",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:silmi224/finetune-led-35000",
"base_model:finetune:silmi224/finetune-led-35000",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-07-14T06:35:27Z |
2024-07-14T08:36:36+00:00
| 20 | 0 |
---
base_model: silmi224/finetune-led-35000
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: led-risalah_data_v17_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# led-risalah_data_v17_2
This model is a fine-tuned version of [silmi224/finetune-led-35000](https://huggingface.co/silmi224/finetune-led-35000) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6183
- Rouge1: 24.9438
- Rouge2: 12.823
- Rougel: 19.4874
- Rougelsum: 23.9852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 3.0463 | 1.0 | 20 | 2.6201 | 10.5455 | 3.3808 | 8.0147 | 9.7658 |
| 2.7524 | 2.0 | 40 | 2.3724 | 13.2211 | 4.2659 | 9.4319 | 11.4823 |
| 2.4437 | 3.0 | 60 | 2.1687 | 15.9732 | 4.6109 | 10.6226 | 14.1832 |
| 2.1607 | 4.0 | 80 | 2.0550 | 17.731 | 6.3571 | 10.6519 | 16.6744 |
| 2.0465 | 5.0 | 100 | 1.9641 | 19.3209 | 6.788 | 12.3334 | 17.2773 |
| 1.8932 | 6.0 | 120 | 1.8951 | 20.2099 | 9.1781 | 14.4373 | 18.5711 |
| 1.8485 | 7.0 | 140 | 1.8391 | 17.9081 | 7.2188 | 12.0437 | 16.1709 |
| 1.7211 | 8.0 | 160 | 1.7814 | 20.2991 | 8.2239 | 13.6757 | 18.9692 |
| 1.6461 | 9.0 | 180 | 1.7475 | 25.3547 | 10.5964 | 16.5484 | 23.7821 |
| 1.6109 | 10.0 | 200 | 1.7211 | 22.2062 | 9.3952 | 15.2277 | 21.1163 |
| 1.5818 | 11.0 | 220 | 1.7049 | 22.8022 | 9.2525 | 15.8587 | 21.4785 |
| 1.5194 | 12.0 | 240 | 1.6829 | 23.9497 | 11.1116 | 16.8015 | 22.8818 |
| 1.4541 | 13.0 | 260 | 1.6700 | 23.3403 | 11.4888 | 16.9861 | 22.4228 |
| 1.3816 | 14.0 | 280 | 1.6555 | 25.8179 | 13.2041 | 17.7017 | 24.7336 |
| 1.3908 | 15.0 | 300 | 1.6451 | 25.697 | 13.4504 | 18.41 | 24.7942 |
| 1.364 | 16.0 | 320 | 1.6224 | 25.7576 | 11.9706 | 17.695 | 24.2206 |
| 1.2521 | 17.0 | 340 | 1.6094 | 24.1556 | 12.942 | 18.5932 | 23.2197 |
| 1.2384 | 18.0 | 360 | 1.6041 | 25.1035 | 12.7288 | 18.2081 | 24.4216 |
| 1.2734 | 19.0 | 380 | 1.6075 | 25.482 | 13.4025 | 19.7018 | 25.1256 |
| 1.1228 | 20.0 | 400 | 1.6183 | 24.9438 | 12.823 | 19.4874 | 23.9852 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# led-risalah_data_v17_2
This model is a fine-tuned version of [silmi224/finetune-led-35000](https://huggingface.co/silmi224/finetune-led-35000) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6183
- Rouge1: 24.9438
- Rouge2: 12.823
- Rougel: 19.4874
- Rougelsum: 23.9852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 3.0463 | 1.0 | 20 | 2.6201 | 10.5455 | 3.3808 | 8.0147 | 9.7658 |
| 2.7524 | 2.0 | 40 | 2.3724 | 13.2211 | 4.2659 | 9.4319 | 11.4823 |
| 2.4437 | 3.0 | 60 | 2.1687 | 15.9732 | 4.6109 | 10.6226 | 14.1832 |
| 2.1607 | 4.0 | 80 | 2.0550 | 17.731 | 6.3571 | 10.6519 | 16.6744 |
| 2.0465 | 5.0 | 100 | 1.9641 | 19.3209 | 6.788 | 12.3334 | 17.2773 |
| 1.8932 | 6.0 | 120 | 1.8951 | 20.2099 | 9.1781 | 14.4373 | 18.5711 |
| 1.8485 | 7.0 | 140 | 1.8391 | 17.9081 | 7.2188 | 12.0437 | 16.1709 |
| 1.7211 | 8.0 | 160 | 1.7814 | 20.2991 | 8.2239 | 13.6757 | 18.9692 |
| 1.6461 | 9.0 | 180 | 1.7475 | 25.3547 | 10.5964 | 16.5484 | 23.7821 |
| 1.6109 | 10.0 | 200 | 1.7211 | 22.2062 | 9.3952 | 15.2277 | 21.1163 |
| 1.5818 | 11.0 | 220 | 1.7049 | 22.8022 | 9.2525 | 15.8587 | 21.4785 |
| 1.5194 | 12.0 | 240 | 1.6829 | 23.9497 | 11.1116 | 16.8015 | 22.8818 |
| 1.4541 | 13.0 | 260 | 1.6700 | 23.3403 | 11.4888 | 16.9861 | 22.4228 |
| 1.3816 | 14.0 | 280 | 1.6555 | 25.8179 | 13.2041 | 17.7017 | 24.7336 |
| 1.3908 | 15.0 | 300 | 1.6451 | 25.697 | 13.4504 | 18.41 | 24.7942 |
| 1.364 | 16.0 | 320 | 1.6224 | 25.7576 | 11.9706 | 17.695 | 24.2206 |
| 1.2521 | 17.0 | 340 | 1.6094 | 24.1556 | 12.942 | 18.5932 | 23.2197 |
| 1.2384 | 18.0 | 360 | 1.6041 | 25.1035 | 12.7288 | 18.2081 | 24.4216 |
| 1.2734 | 19.0 | 380 | 1.6075 | 25.482 | 13.4025 | 19.7018 | 25.1256 |
| 1.1228 | 20.0 | 400 | 1.6183 | 24.9438 | 12.823 | 19.4874 | 23.9852 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
{"base_model": "silmi224/finetune-led-35000", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "led-risalah_data_v17_2", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 42,528 |
pravin691983/few-shot-learning-classification-bert-sm-5K
|
pravin691983
|
text-classification
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain",
"dataset:few-shot-learning-classification-bert-sm-5K/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-05-06T15:25:01Z |
2024-06-10T04:48:33+00:00
| 9 | 0 |
---
datasets:
- few-shot-learning-classification-bert-sm-5K/autotrain-data
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
# Publisher Info
- Publisher, PRAVIN SURESH TAWADE
- Co-Publisher, Dr.JAYA KRISHNA GUTHA
## Validation Metrics
loss: 1.3727654218673706
f1_macro: 0.13365915943298254
f1_micro: 0.26675
f1_weighted: 0.13365915943298254
precision_macro: 0.31979278932512184
precision_micro: 0.26675
precision_weighted: 0.3197927893251219
recall_macro: 0.26675
recall_micro: 0.26675
recall_weighted: 0.26675
accuracy: 0.26675
## Data in depth
One of the potential business applications of few-shot text classification with the AG News dataset is in media and content companies. They could implement this technology to categorize news articles on world, sports, business, technology, and other topics with minimal labeled data. This few-shot model application would allow for more efficient management and retrieval of news content, improving user satisfaction with personalized news feed. Moreover, such a model will allow these companies to promptly adjust their classification to new categories or rapidly emerging topics in dynamic industries.
With a concern that the repetition of the source material may impair the perception of the results of my adaptation, I would prefer to avoid working with the same data I encountered during the course. Therefore, I would like to select a diverse text dataset where the number of the labelled examples available for each of the classes is limited. Additionally, in order to evaluate the effectiveness of the model, I would consider varying the domains and types of documents. The work will begin with the choice of the dataset, and the one I have selected is the AG’s News Corpus, which can be accessed on Hugging Face. In my study, I use this collection of news articles, divided into four primary classes: World, Sports, Business, and Sci/Tech. The sizes of the dataset are as follows: 30,000 training samples and 1,900 test samples for each of the classes.
- Dataset size: 31.3 MB
- Data Split: 127600 rows
- Data Fields:
- Text: A feature represented by a string.
- Label: A set of classification labels comprising World (0), Sports (1), Business (2), and Sci/Tech (3).
| null |
Non_BioNLP
|
# Model Trained Using AutoTrain
- Problem type: Text Classification
# Publisher Info
- Publisher, PRAVIN SURESH TAWADE
- Co-Publisher, Dr.JAYA KRISHNA GUTHA
## Validation Metrics
loss: 1.3727654218673706
f1_macro: 0.13365915943298254
f1_micro: 0.26675
f1_weighted: 0.13365915943298254
precision_macro: 0.31979278932512184
precision_micro: 0.26675
precision_weighted: 0.3197927893251219
recall_macro: 0.26675
recall_micro: 0.26675
recall_weighted: 0.26675
accuracy: 0.26675
## Data in depth
One of the potential business applications of few-shot text classification with the AG News dataset is in media and content companies. They could implement this technology to categorize news articles on world, sports, business, technology, and other topics with minimal labeled data. This few-shot model application would allow for more efficient management and retrieval of news content, improving user satisfaction with personalized news feed. Moreover, such a model will allow these companies to promptly adjust their classification to new categories or rapidly emerging topics in dynamic industries.
With a concern that the repetition of the source material may impair the perception of the results of my adaptation, I would prefer to avoid working with the same data I encountered during the course. Therefore, I would like to select a diverse text dataset where the number of the labelled examples available for each of the classes is limited. Additionally, in order to evaluate the effectiveness of the model, I would consider varying the domains and types of documents. The work will begin with the choice of the dataset, and the one I have selected is the AG’s News Corpus, which can be accessed on Hugging Face. In my study, I use this collection of news articles, divided into four primary classes: World, Sports, Business, and Sci/Tech. The sizes of the dataset are as follows: 30,000 training samples and 1,900 test samples for each of the classes.
- Dataset size: 31.3 MB
- Data Split: 127600 rows
- Data Fields:
- Text: A feature represented by a string.
- Label: A set of classification labels comprising World (0), Sports (1), Business (2), and Sci/Tech (3).
|
{"datasets": ["few-shot-learning-classification-bert-sm-5K/autotrain-data"], "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,529 |
RichardErkhov/besimray_-_miner_id_1_56d9075c-cf98-498b-8ad6-84bc66fb6ee2_1729801843-awq
|
RichardErkhov
| null |
[
"safetensors",
"llama",
"4-bit",
"awq",
"region:us"
] | 2024-12-25T16:30:25Z |
2024-12-25T16:31:44+00:00
| 11 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
miner_id_1_56d9075c-cf98-498b-8ad6-84bc66fb6ee2_1729801843 - AWQ
- Model creator: https://huggingface.co/besimray/
- Original model: https://huggingface.co/besimray/miner_id_1_56d9075c-cf98-498b-8ad6-84bc66fb6ee2_1729801843/
Original model description:
---
base_model: meta-llama/Llama-3.2-1B
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
---
# Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (1B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# Llama-3.2-1B
For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
## Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
miner_id_1_56d9075c-cf98-498b-8ad6-84bc66fb6ee2_1729801843 - AWQ
- Model creator: https://huggingface.co/besimray/
- Original model: https://huggingface.co/besimray/miner_id_1_56d9075c-cf98-498b-8ad6-84bc66fb6ee2_1729801843/
Original model description:
---
base_model: meta-llama/Llama-3.2-1B
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
---
# Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (1B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# Llama-3.2-1B
For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
## Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
|
{}
|
task
|
[
"SUMMARIZATION"
] | 42,530 |
RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf
|
RichardErkhov
| null |
[
"gguf",
"arxiv:2306.05685",
"endpoints_compatible",
"region:us"
] | 2024-07-25T23:51:01Z |
2024-07-27T04:44:43+00:00
| 119 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
radiantloom-llama-70b-instruct - GGUF
- Model creator: https://huggingface.co/Radiantloom/
- Original model: https://huggingface.co/Radiantloom/radiantloom-llama-70b-instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [radiantloom-llama-70b-instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.Q2_K.gguf) | Q2_K | 23.71GB |
| [radiantloom-llama-70b-instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.IQ3_XS.gguf) | IQ3_XS | 26.37GB |
| [radiantloom-llama-70b-instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.IQ3_S.gguf) | IQ3_S | 27.86GB |
| [radiantloom-llama-70b-instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.Q3_K_S.gguf) | Q3_K_S | 27.86GB |
| [radiantloom-llama-70b-instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.IQ3_M.gguf) | IQ3_M | 28.82GB |
| [radiantloom-llama-70b-instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.Q3_K.gguf) | Q3_K | 30.99GB |
| [radiantloom-llama-70b-instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.Q3_K_M.gguf) | Q3_K_M | 30.99GB |
| [radiantloom-llama-70b-instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.Q3_K_L.gguf) | Q3_K_L | 33.67GB |
| [radiantloom-llama-70b-instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.IQ4_XS.gguf) | IQ4_XS | 34.64GB |
| [radiantloom-llama-70b-instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.Q4_0.gguf) | Q4_0 | 36.2GB |
| [radiantloom-llama-70b-instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.IQ4_NL.gguf) | IQ4_NL | 36.55GB |
| [radiantloom-llama-70b-instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.Q4_K_S.gguf) | Q4_K_S | 36.55GB |
| [radiantloom-llama-70b-instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/tree/main/) | Q4_K | 38.58GB |
| [radiantloom-llama-70b-instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/tree/main/) | Q4_K_M | 38.58GB |
| [radiantloom-llama-70b-instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/tree/main/) | Q4_1 | 40.2GB |
| [radiantloom-llama-70b-instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/tree/main/) | Q5_0 | 44.2GB |
| [radiantloom-llama-70b-instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/tree/main/) | Q5_K_S | 44.2GB |
| [radiantloom-llama-70b-instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/tree/main/) | Q5_K | 45.41GB |
| [radiantloom-llama-70b-instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/tree/main/) | Q5_K_M | 45.41GB |
| [radiantloom-llama-70b-instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/tree/main/) | Q5_1 | 48.2GB |
| [radiantloom-llama-70b-instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/tree/main/) | Q6_K | 52.7GB |
| [radiantloom-llama-70b-instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/tree/main/) | Q8_0 | 68.26GB |
Original model description:
---
library_name: transformers
tags:
- SFT
- Llama
- Llama 70B Chat
license: llama2
---
<img src="https://huggingface.co/Radiantloom/radiantloom-llama-70b-instruct/resolve/main/radiantloom-llama-70b-instruct.jpeg" alt="Radiantloom Llama 70B Instruct" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
## Radiantloom Llama 70B Instruct
The Radiantloom Llama 70B Instruct, a large language model (LLM) developed by Radiantloom AI, features approximately 70 billion parameters. From vibes-check evaluations, the Radiantloom Llama 70B Instruct demonstrates great performance in various applications like creative writing, multi-turn conversations, in-context learning through Retrieval Augmented Generation (RAG), and coding tasks. Its out-of-the-box performance already delivers impressive results, particularly in writing, coding, and reasoning tasks. This model produces longer form content and provides detailed explanations of its actions. To maximize its potential, consider implementing instruction tuning and Reinforcement Learning with Human Feedback (RLHF) techniques for further refinement. Alternatively, you can utilize it in its current form.
## Intended Uses & Limitations
The Radiantloom Llama 70B Instruct is versatile and can be utilized for various text generation tasks such as summarization, chat, coding, question answering, retrieval augmented generation (RAG), role play, and content generation.
It can be considered a state-of-the-art generative language model, it demonstrates competitive performance in general tasks when compared to other open and closed-source large language models such as GPT 3.5 Turbo, and Mixtral 8x7B Instruct.
## Model Usage
You can deploy it on a Runpod GPU instance using our [one-click Runpod template](https://www.runpod.io/console/gpu-secure-cloud?ref=80eh3891&template=tbkdia6qdv) (Our Referral Link. Please consider Supporting). This template provides you with an OpenAI-compatible API endpoint that you can integrate into your existing codebase designed for OpenAI APIs.
## Prompt Template
We have fine-tuned this model using the ChatML format, and you can achieve optimal performance by utilizing the ChatML format.
```
<s>[INST] <<SYS>>{system_message}<</SYS>> {user_message}[/INST]
```
## Inference Code
```python
## install dependencies
## !pip install -q -U git+https://github.com/huggingface/transformers.git
## !pip install -q -U git+https://github.com/huggingface/accelerate.git
## !pip install -q -U bitsandbytes sentencepiece
## load model
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer
model_name = "Radiantloom/radiantloom-llama-70b-instruct"
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.bfloat16
)
# load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
model_name,
quantization_config=bnb_config,
device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(
model_name,
trust_remote_code=True
)
tokenizer.pad_token = tokenizer.eos_token
## inference
prompt = """
<s>[INST]<<SYS>>You are a helpful AI assistant developed by Radiantloom AI.<</SYS>>Take a deep breath and think step-by-step. Write a tale about a time-traveling historian who's determined to witness the most significant events in human history.[/INST]
"""
# tokenize the input string
inputs = tokenizer(
prompt,
return_tensors="pt",
return_attention_mask=False
)
# generate text using the model
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
outputs = model.generate(**inputs, max_length=2048, streamer=streamer, do_sample=True, repetition_penalty=1.1"))
# decode and print the output
text = tokenizer.batch_decode(outputs)[0]
```
Here are a few example use cases. if we query the model with
```
<s>[INST]<<SYS>>You are a helpful AI assistant developed by Radiantloom AI.<</SYS>>Take a deep breath and think step-by-step. Write a tale about a time-traveling historian who's determined to witness the most significant events in human history.[/INST]
```
the model will return
```
Story: The Time Chronicles of Professor Amelia Harcourt
Professor Amelia Harcourt was an exceptional historian, renowned for her remarkable contributions to the field of historical studies. However, her passion for understanding the past extended far beyond academics. She had always dreamt of witnessing firsthand the events that shaped the world as we know it. When she stumbled upon an ancient manuscript filled with cryptic symbols while working on a research project, she wasn't aware that it would change her life forever.
After translating the mysterious text, Amelia discovered that it contained blueprints for constructing an extraordinary device - a time machine that could transport its user across different eras. Fueled by an all-consuming fervor for exploring pivotal moments in history, Amelia spent her remaining savings on procuring the materials necessary to build the intricate apparatus. The process required tireless dedication and rigorous labor, but through a mixture of intelligence, skill, and unrelenting determination, she finally constructed a functioning time machine three years later.
With anticipation surging through her veins, Amelia embarked on a transcendent expedition to experience the most significant occurrences in human history. Her very first stop took place more than two million years ago: the East African Rift Valley, where homo habilis, our early ancestors, were believed to have emerged. In this remote corner of prehistory, Amelia studied their behavior closely and marveled at their resourcefulness. Their innovative use of tools and rudimentary social systems helped shape complex human societies of the future, making the encounter a moving lesson on the importance of adaptability.
The second destination brought Amelia even further back in time, landing directly in Earth's arctic region during the late Cretaceous period. This journey held both personal ambition and scientific curiosity; Amelia wished to better understand why dinosaurs went extinct. Upon arrival, the historian felt a wave of unease as she navigated through the treacherous environment filled with towering dinosaur species, including the voracious Tyrannosaurus rex. With immense caution, she conducted her analysis and came to the shocking conclusion that asteroid impacts, not volcanic activity, were primarily responsible for wiping out the majority of dinosaur species. As these creatures faced catastrophic conditions, mammals thrived and eventually evolved into humans.
Amelia continued her chronological pilgrimage with an extended stay at Stonehenge, one of the most mystifying historical sites she'd ever visited. After learning from several sources during her academic career, she hypothesized that the magnificent monument was built as part of a larger complex dedicated to worshipping gods tied to celestial bodies and fertility. Upon arriving in Neolithic Britain, Amelia watched in awe as people worked together to erect massive bluestones using ropes made from animal hides, wooden rollers, and levers carved from antlers. Witnessing this collaborative effort revealed a newfound appreciation for the technological ingenuity and social organization present in early human civilizations.
Throughout subsequent adventures, Amelia became captivated by the enlightening teachings of Socrates in Ancient Greece and traveled through the bustling metropolis of imperial Rome. These excursions showcased the timeless themes of wisdom, governance, justice, art, and culture. They illuminated various aspects of human nature and the inherent values shared across every epoch.
Overwhelmed by curiosity and a newfound sense of wonderment, Amelia decided to explore other transformative eras such as the discovery of the New World and the Industrial Revolution. Eventually, she found herself amidst the burgeoning civil rights movement in America during the 1960s, watching in admiration as advocates for racial equality peacefully rallied against systemic oppression and fought for equal rights.
As Professor Amelia Harcourt immersed herself in the diverse expanse of human history, her encounters revealed a profound understanding of progress. Each event she observed had contributed something unique and essential to the development, growth, and evolution of mankind. This revelation reminded her of an old saying: "Only those who look backward can truly see forward." Through her time travels, Amelia began recognizing the patterns and interconnections between different eras, gaining immense insight into the human spirit's resilience and capacity for change.
Inspired by her once-in-a-lifetime odyssey, Amelia Harcourt set forth creating The Time Chronicles-an extensive body of work featuring detailed accounts from her journeys through time. This collection transformed academic views regarding the past and offered generations to come priceless knowledge and fascinating perspectives vital to understanding our shared human story.
```
and if we query the model with
```
<s>[INST]<<SYS>>You are an AI assistant. Take a deep breath, think step-by-step and respond to following instructions from the user. <</SYS>>The cafeteria had 23 apples. If they used 20 for lunch and bought 6 more, how many apples do they have?[/INST]
```
the model will return
```
Let's break this down:
1. The cafeteria initially had 23 apples.
2. They used 20 for lunch. To find out how many apples were left after lunch, subtract the number of apples used for lunch (20) from the initial number of apples (23). This gives us 23 - 20 = 3 apples left after lunch.
3. Then they purchased 6 more apples. To find out the total number of apples now, add the number of apples left after lunch (3) to the number of apples bought recently (6). So we have 3 + 6 = 9 apples in total.
```
and if we query the model with
```
<s>[INST]<<SYS>>You are an AI assistant. Take a deep breath, think step-by-step and respond to following instructions from the user. <</SYS>>Write a function in Python that checks if a number is prime.[/INST]
```
the model will return
```
The user will provide you with a positive integer and your task is to check whether it can be divided by any other number (divisors) except for one and itself (the number).
Let's say the number given by the user is 2147483647. Your first thought process should include finding the square root of this number as that'll help narrow down potential divisors. Here, sqrt(2147483647) = 46340. Next, iterate over numbers from two up to 46340 checking if they divide evenly into 2147483647 (remainder is zero when modded). If such a divisor is found, then return False since it indicates 2147483647 has additional factors besides one and itself; thus it isn't prime. Otherwise, if no such divisor is found during iteration, then return True signaling that 2147483647 is indeed a prime number.
But make sure you also consider edge cases like 2, because some programming languages might use zero-based indexing.
Here's a possible solution:
```
```python
import math
def is_prime(n):
if n <= 1: # Edge case: numbers below 2 are not considered primes
return False
elif n == 2 or n == 3: # Special cases for small primes
return True
elif n % 2 == 0: # Quick filter for even numbers
return False
root = int(math.sqrt(n))
for i in range(3, root + 1, 2): # Only odd divisors needed; start at 3
if n % i == 0:
return False
return True
number = 2147483647
result = is_prime(number)
print(f"The result for {number} is:", "Prime" if result else "Not Prime")
```
## Evaluations
We are encouraged by the initial assessments conducted using the [LLM-as-a-Judge method](https://arxiv.org/abs/2306.05685), particularly with the utilization of GPT-4. Our ongoing efforts involve a more comprehensive evaluation of the model. We have also entered this model into the HuggingFace Open Leaderboard for assessment. As we progress, we will continue to provide updates on our findings and share the results on this platform.
## Ethical Considerations and Limitations
Radiantloom Llama 70B Instruct, a powerful AI language model, can produce factually incorrect output and content not suitable for work (NSFW). It should not be relied upon to provide factually accurate information and should be used with caution. Due to the limitations of its pre-trained model and the finetuning datasets, it may generate lewd, biased, or otherwise offensive content. Consequently, developers should conduct thorough safety testing prior to implementing any applications of this model.
## About Radiantloom AI
Radiantloom AI trains open-source large language models tailored for specific business tasks such as copilots, email assistance, customer support, and database operations.
Learn more about Radiantloom by visiting our [website](https://radiantloom.com). Follow us on Twitter at [Radiantloom](https://twitter.com/radiantloom) to gain early access to upcoming Radiantloom AI large language models.
## Want to Support Our Work?
We are passionate about contributing open-source AI models to the broader AI community to accelerate AI development and deployment. If you find our work valuable and would like to contribute to its continued success, consider buying us a few coffees!
Your support goes a long way in helping us cover training costs, fund development tools, and keep the project thriving. Every coffee you buy is a small but meaningful contribution that fuels our dedication to providing a top-notch experience for users like you.
[Buy us a coffee](https://www.buymeacoffee.com/aigeek_) and be a part of our journey towards making this project even more awesome. We appreciate your support!
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
radiantloom-llama-70b-instruct - GGUF
- Model creator: https://huggingface.co/Radiantloom/
- Original model: https://huggingface.co/Radiantloom/radiantloom-llama-70b-instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [radiantloom-llama-70b-instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.Q2_K.gguf) | Q2_K | 23.71GB |
| [radiantloom-llama-70b-instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.IQ3_XS.gguf) | IQ3_XS | 26.37GB |
| [radiantloom-llama-70b-instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.IQ3_S.gguf) | IQ3_S | 27.86GB |
| [radiantloom-llama-70b-instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.Q3_K_S.gguf) | Q3_K_S | 27.86GB |
| [radiantloom-llama-70b-instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.IQ3_M.gguf) | IQ3_M | 28.82GB |
| [radiantloom-llama-70b-instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.Q3_K.gguf) | Q3_K | 30.99GB |
| [radiantloom-llama-70b-instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.Q3_K_M.gguf) | Q3_K_M | 30.99GB |
| [radiantloom-llama-70b-instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.Q3_K_L.gguf) | Q3_K_L | 33.67GB |
| [radiantloom-llama-70b-instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.IQ4_XS.gguf) | IQ4_XS | 34.64GB |
| [radiantloom-llama-70b-instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.Q4_0.gguf) | Q4_0 | 36.2GB |
| [radiantloom-llama-70b-instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.IQ4_NL.gguf) | IQ4_NL | 36.55GB |
| [radiantloom-llama-70b-instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/blob/main/radiantloom-llama-70b-instruct.Q4_K_S.gguf) | Q4_K_S | 36.55GB |
| [radiantloom-llama-70b-instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/tree/main/) | Q4_K | 38.58GB |
| [radiantloom-llama-70b-instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/tree/main/) | Q4_K_M | 38.58GB |
| [radiantloom-llama-70b-instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/tree/main/) | Q4_1 | 40.2GB |
| [radiantloom-llama-70b-instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/tree/main/) | Q5_0 | 44.2GB |
| [radiantloom-llama-70b-instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/tree/main/) | Q5_K_S | 44.2GB |
| [radiantloom-llama-70b-instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/tree/main/) | Q5_K | 45.41GB |
| [radiantloom-llama-70b-instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/tree/main/) | Q5_K_M | 45.41GB |
| [radiantloom-llama-70b-instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/tree/main/) | Q5_1 | 48.2GB |
| [radiantloom-llama-70b-instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/tree/main/) | Q6_K | 52.7GB |
| [radiantloom-llama-70b-instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/Radiantloom_-_radiantloom-llama-70b-instruct-gguf/tree/main/) | Q8_0 | 68.26GB |
Original model description:
---
library_name: transformers
tags:
- SFT
- Llama
- Llama 70B Chat
license: llama2
---
<img src="https://huggingface.co/Radiantloom/radiantloom-llama-70b-instruct/resolve/main/radiantloom-llama-70b-instruct.jpeg" alt="Radiantloom Llama 70B Instruct" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
## Radiantloom Llama 70B Instruct
The Radiantloom Llama 70B Instruct, a large language model (LLM) developed by Radiantloom AI, features approximately 70 billion parameters. From vibes-check evaluations, the Radiantloom Llama 70B Instruct demonstrates great performance in various applications like creative writing, multi-turn conversations, in-context learning through Retrieval Augmented Generation (RAG), and coding tasks. Its out-of-the-box performance already delivers impressive results, particularly in writing, coding, and reasoning tasks. This model produces longer form content and provides detailed explanations of its actions. To maximize its potential, consider implementing instruction tuning and Reinforcement Learning with Human Feedback (RLHF) techniques for further refinement. Alternatively, you can utilize it in its current form.
## Intended Uses & Limitations
The Radiantloom Llama 70B Instruct is versatile and can be utilized for various text generation tasks such as summarization, chat, coding, question answering, retrieval augmented generation (RAG), role play, and content generation.
It can be considered a state-of-the-art generative language model, it demonstrates competitive performance in general tasks when compared to other open and closed-source large language models such as GPT 3.5 Turbo, and Mixtral 8x7B Instruct.
## Model Usage
You can deploy it on a Runpod GPU instance using our [one-click Runpod template](https://www.runpod.io/console/gpu-secure-cloud?ref=80eh3891&template=tbkdia6qdv) (Our Referral Link. Please consider Supporting). This template provides you with an OpenAI-compatible API endpoint that you can integrate into your existing codebase designed for OpenAI APIs.
## Prompt Template
We have fine-tuned this model using the ChatML format, and you can achieve optimal performance by utilizing the ChatML format.
```
<s>[INST] <<SYS>>{system_message}<</SYS>> {user_message}[/INST]
```
## Inference Code
```python
## install dependencies
## !pip install -q -U git+https://github.com/huggingface/transformers.git
## !pip install -q -U git+https://github.com/huggingface/accelerate.git
## !pip install -q -U bitsandbytes sentencepiece
## load model
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer
model_name = "Radiantloom/radiantloom-llama-70b-instruct"
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.bfloat16
)
# load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
model_name,
quantization_config=bnb_config,
device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(
model_name,
trust_remote_code=True
)
tokenizer.pad_token = tokenizer.eos_token
## inference
prompt = """
<s>[INST]<<SYS>>You are a helpful AI assistant developed by Radiantloom AI.<</SYS>>Take a deep breath and think step-by-step. Write a tale about a time-traveling historian who's determined to witness the most significant events in human history.[/INST]
"""
# tokenize the input string
inputs = tokenizer(
prompt,
return_tensors="pt",
return_attention_mask=False
)
# generate text using the model
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
outputs = model.generate(**inputs, max_length=2048, streamer=streamer, do_sample=True, repetition_penalty=1.1"))
# decode and print the output
text = tokenizer.batch_decode(outputs)[0]
```
Here are a few example use cases. if we query the model with
```
<s>[INST]<<SYS>>You are a helpful AI assistant developed by Radiantloom AI.<</SYS>>Take a deep breath and think step-by-step. Write a tale about a time-traveling historian who's determined to witness the most significant events in human history.[/INST]
```
the model will return
```
Story: The Time Chronicles of Professor Amelia Harcourt
Professor Amelia Harcourt was an exceptional historian, renowned for her remarkable contributions to the field of historical studies. However, her passion for understanding the past extended far beyond academics. She had always dreamt of witnessing firsthand the events that shaped the world as we know it. When she stumbled upon an ancient manuscript filled with cryptic symbols while working on a research project, she wasn't aware that it would change her life forever.
After translating the mysterious text, Amelia discovered that it contained blueprints for constructing an extraordinary device - a time machine that could transport its user across different eras. Fueled by an all-consuming fervor for exploring pivotal moments in history, Amelia spent her remaining savings on procuring the materials necessary to build the intricate apparatus. The process required tireless dedication and rigorous labor, but through a mixture of intelligence, skill, and unrelenting determination, she finally constructed a functioning time machine three years later.
With anticipation surging through her veins, Amelia embarked on a transcendent expedition to experience the most significant occurrences in human history. Her very first stop took place more than two million years ago: the East African Rift Valley, where homo habilis, our early ancestors, were believed to have emerged. In this remote corner of prehistory, Amelia studied their behavior closely and marveled at their resourcefulness. Their innovative use of tools and rudimentary social systems helped shape complex human societies of the future, making the encounter a moving lesson on the importance of adaptability.
The second destination brought Amelia even further back in time, landing directly in Earth's arctic region during the late Cretaceous period. This journey held both personal ambition and scientific curiosity; Amelia wished to better understand why dinosaurs went extinct. Upon arrival, the historian felt a wave of unease as she navigated through the treacherous environment filled with towering dinosaur species, including the voracious Tyrannosaurus rex. With immense caution, she conducted her analysis and came to the shocking conclusion that asteroid impacts, not volcanic activity, were primarily responsible for wiping out the majority of dinosaur species. As these creatures faced catastrophic conditions, mammals thrived and eventually evolved into humans.
Amelia continued her chronological pilgrimage with an extended stay at Stonehenge, one of the most mystifying historical sites she'd ever visited. After learning from several sources during her academic career, she hypothesized that the magnificent monument was built as part of a larger complex dedicated to worshipping gods tied to celestial bodies and fertility. Upon arriving in Neolithic Britain, Amelia watched in awe as people worked together to erect massive bluestones using ropes made from animal hides, wooden rollers, and levers carved from antlers. Witnessing this collaborative effort revealed a newfound appreciation for the technological ingenuity and social organization present in early human civilizations.
Throughout subsequent adventures, Amelia became captivated by the enlightening teachings of Socrates in Ancient Greece and traveled through the bustling metropolis of imperial Rome. These excursions showcased the timeless themes of wisdom, governance, justice, art, and culture. They illuminated various aspects of human nature and the inherent values shared across every epoch.
Overwhelmed by curiosity and a newfound sense of wonderment, Amelia decided to explore other transformative eras such as the discovery of the New World and the Industrial Revolution. Eventually, she found herself amidst the burgeoning civil rights movement in America during the 1960s, watching in admiration as advocates for racial equality peacefully rallied against systemic oppression and fought for equal rights.
As Professor Amelia Harcourt immersed herself in the diverse expanse of human history, her encounters revealed a profound understanding of progress. Each event she observed had contributed something unique and essential to the development, growth, and evolution of mankind. This revelation reminded her of an old saying: "Only those who look backward can truly see forward." Through her time travels, Amelia began recognizing the patterns and interconnections between different eras, gaining immense insight into the human spirit's resilience and capacity for change.
Inspired by her once-in-a-lifetime odyssey, Amelia Harcourt set forth creating The Time Chronicles-an extensive body of work featuring detailed accounts from her journeys through time. This collection transformed academic views regarding the past and offered generations to come priceless knowledge and fascinating perspectives vital to understanding our shared human story.
```
and if we query the model with
```
<s>[INST]<<SYS>>You are an AI assistant. Take a deep breath, think step-by-step and respond to following instructions from the user. <</SYS>>The cafeteria had 23 apples. If they used 20 for lunch and bought 6 more, how many apples do they have?[/INST]
```
the model will return
```
Let's break this down:
1. The cafeteria initially had 23 apples.
2. They used 20 for lunch. To find out how many apples were left after lunch, subtract the number of apples used for lunch (20) from the initial number of apples (23). This gives us 23 - 20 = 3 apples left after lunch.
3. Then they purchased 6 more apples. To find out the total number of apples now, add the number of apples left after lunch (3) to the number of apples bought recently (6). So we have 3 + 6 = 9 apples in total.
```
and if we query the model with
```
<s>[INST]<<SYS>>You are an AI assistant. Take a deep breath, think step-by-step and respond to following instructions from the user. <</SYS>>Write a function in Python that checks if a number is prime.[/INST]
```
the model will return
```
The user will provide you with a positive integer and your task is to check whether it can be divided by any other number (divisors) except for one and itself (the number).
Let's say the number given by the user is 2147483647. Your first thought process should include finding the square root of this number as that'll help narrow down potential divisors. Here, sqrt(2147483647) = 46340. Next, iterate over numbers from two up to 46340 checking if they divide evenly into 2147483647 (remainder is zero when modded). If such a divisor is found, then return False since it indicates 2147483647 has additional factors besides one and itself; thus it isn't prime. Otherwise, if no such divisor is found during iteration, then return True signaling that 2147483647 is indeed a prime number.
But make sure you also consider edge cases like 2, because some programming languages might use zero-based indexing.
Here's a possible solution:
```
```python
import math
def is_prime(n):
if n <= 1: # Edge case: numbers below 2 are not considered primes
return False
elif n == 2 or n == 3: # Special cases for small primes
return True
elif n % 2 == 0: # Quick filter for even numbers
return False
root = int(math.sqrt(n))
for i in range(3, root + 1, 2): # Only odd divisors needed; start at 3
if n % i == 0:
return False
return True
number = 2147483647
result = is_prime(number)
print(f"The result for {number} is:", "Prime" if result else "Not Prime")
```
## Evaluations
We are encouraged by the initial assessments conducted using the [LLM-as-a-Judge method](https://arxiv.org/abs/2306.05685), particularly with the utilization of GPT-4. Our ongoing efforts involve a more comprehensive evaluation of the model. We have also entered this model into the HuggingFace Open Leaderboard for assessment. As we progress, we will continue to provide updates on our findings and share the results on this platform.
## Ethical Considerations and Limitations
Radiantloom Llama 70B Instruct, a powerful AI language model, can produce factually incorrect output and content not suitable for work (NSFW). It should not be relied upon to provide factually accurate information and should be used with caution. Due to the limitations of its pre-trained model and the finetuning datasets, it may generate lewd, biased, or otherwise offensive content. Consequently, developers should conduct thorough safety testing prior to implementing any applications of this model.
## About Radiantloom AI
Radiantloom AI trains open-source large language models tailored for specific business tasks such as copilots, email assistance, customer support, and database operations.
Learn more about Radiantloom by visiting our [website](https://radiantloom.com). Follow us on Twitter at [Radiantloom](https://twitter.com/radiantloom) to gain early access to upcoming Radiantloom AI large language models.
## Want to Support Our Work?
We are passionate about contributing open-source AI models to the broader AI community to accelerate AI development and deployment. If you find our work valuable and would like to contribute to its continued success, consider buying us a few coffees!
Your support goes a long way in helping us cover training costs, fund development tools, and keep the project thriving. Every coffee you buy is a small but meaningful contribution that fuels our dedication to providing a top-notch experience for users like you.
[Buy us a coffee](https://www.buymeacoffee.com/aigeek_) and be a part of our journey towards making this project even more awesome. We appreciate your support!
|
{}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 42,531 |
schawla2/e5-unsupervised-financial-matryoshka
|
schawla2
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:6300",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:intfloat/e5-large-unsupervised",
"base_model:finetune:intfloat/e5-large-unsupervised",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-03-02T21:54:19Z |
2025-03-02T21:58:52+00:00
| 9 | 0 |
---
base_model: intfloat/e5-large-unsupervised
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6300
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: What are the key components of the transparency provisions included
in the Consolidated Appropriations Act of 2021 regarding healthcare?
sentences:
- The report includes information on legal proceedings under 'Note 13 — Commitments
and Contingencies — Litigation and Other Legal Matters' which is a part of the
consolidated financial statements
- The Consolidated Appropriations Act of 2021 was signed into law in December 2020
and contains further transparency provisions requiring group health plans and
health insurance issuers to report certain prescription drug costs, overall spending
on health services and prescription drugs, and information about premiums and
the impact of rebates and other remuneration on premiums and out-of-pocket costs
to the Tri-Departments.
- In 2023, the company recorded other operating charges of $1,951 million.
- source_sentence: What technology does the Tax Advisor use and for what purpose in
Intuit's offerings?
sentences:
- In 2023, Goldman Sachs' investments in funds at NAV primarily included firm-sponsored
private equity, credit, real estate, and hedge funds. These funds are involved
in various types of investments such as leveraged buyouts, recapitalizations,
growth investments, and distressed investments for private equity, while credit
funds are focused on providing private high-yield capital for leveraged and management
buyout transactions. Real estate funds invest globally in real estate assets,
and hedge funds adopt a fundamental bottom-up investment approach.
- Using AI technologies, our Tax Advisor offering leverages information generated
from our ProConnect Tax Online and Lacerte offerings to enable year-round tax
planning services and communicate tax savings strategies to clients.
- '''Note 13 — Commitments and Contingencies'' provides details about litigation
and other legal matters in an Annual Report on Form 10-K.'
- source_sentence: What was the net revenue for the Data Center segment in 2023?
sentences:
- Data Center net revenue of $6.5 billion in 2023 increased by 7%, compared to net
revenue of $6.0 billion in 2022.
- Under its Class 2 insurance license, Caterpillar Insurance Co. Ltd. insures its
parent and affiliates for general liability, property, auto liability and cargo.
It also provides reinsurance to CaterThe pillar Insurance Company under a quota
share reinsurance agreement for its contractual liability and contractors’ equipment
programs in the United States.
- Schwab’s funding of these remaining commitments is dependent upon the occurrence
of certain conditions, and Schwab expects to pay substantially all of these commitments
between 2024 and 2027.
- source_sentence: What are the three principles of liquidity risk management at Goldman
Sachs?
sentences:
- The Company determines if an arrangement is a lease at inception and classifies
its leases at commencement. Operating leases are included in operating lease right-of-use
("ROU") assets and current and noncurrent operating lease liabilities on the Company’s
consolidated balance sheets.
- Garmin Ltd. reported a net income of $1,289,636 for the fiscal year ended December
30, 2023.
- 'Goldman Sachs manages liquidity risk based on three principles: 1) hold sufficient
excess liquidity in the form of GCLA to cover outflows during a stressed period,
2) maintain appropriate Asset-Liability Management, and 3) maintain a viable Contingency
Funding Plan.'
- source_sentence: What was the total cost and expenses reported by Berkshire Hathaway
for the year ended December 31, 2023?
sentences:
- Total costs and expenses | | 321,144 | | | 266,484 | | | 243,752
- Qulipta (atogepant) is a calcitonin gene-related peptide receptor antagonist indicated
for the preventive treatment of episodic and chronic migraine in adults. Qulipta
is commercialized in the United States and Canada and is approved in the European
Union under the brand name Aquipta.
- Item 3 'Legal Proceedings' is integrated by reference to other parts including
Note 22 — 'Environmental and legal matters' and Part II, Item 8.
model-index:
- name: E5 unsupervised Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.7271428571428571
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.85
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8785714285714286
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9114285714285715
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7271428571428571
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2833333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17571428571428568
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09114285714285714
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7271428571428571
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.85
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8785714285714286
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9114285714285715
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.822517236613446
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7936921768707483
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7973883589026711
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.7271428571428571
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8457142857142858
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.88
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9128571428571428
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7271428571428571
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.28190476190476194
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.176
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09128571428571429
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7271428571428571
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8457142857142858
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.88
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9128571428571428
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8223709830528422
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.793145691609977
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7966990460475021
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.72
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8457142857142858
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8714285714285714
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9057142857142857
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.72
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.28190476190476194
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17428571428571424
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09057142857142855
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.72
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8457142857142858
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8714285714285714
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9057142857142857
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8159991941699124
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7869370748299319
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7906967878713818
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.7085714285714285
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8285714285714286
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8728571428571429
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8985714285714286
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7085714285714285
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2761904761904762
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17457142857142854
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08985714285714284
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7085714285714285
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8285714285714286
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8728571428571429
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8985714285714286
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8073517667504667
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7777108843537414
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7815591417851651
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.6757142857142857
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8185714285714286
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8457142857142858
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8842857142857142
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6757142857142857
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.27285714285714285
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16914285714285712
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08842857142857141
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6757142857142857
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8185714285714286
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8457142857142858
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8842857142857142
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7861731335824387
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7542681405895693
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7588497811523153
name: Cosine Map@100
---
# E5 unsupervised Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/e5-large-unsupervised](https://huggingface.co/intfloat/e5-large-unsupervised) on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/e5-large-unsupervised](https://huggingface.co/intfloat/e5-large-unsupervised) <!-- at revision 15af9288f69a6291f37bfb89b47e71abc747b206 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("schawla2/e5-unsupervised-financial-matryoshka")
# Run inference
sentences = [
'What was the total cost and expenses reported by Berkshire Hathaway for the year ended December 31, 2023?',
'Total costs and expenses | | 321,144 | | | 266,484 | | | 243,752',
'Qulipta (atogepant) is a calcitonin gene-related peptide receptor antagonist indicated for the preventive treatment of episodic and chronic migraine in adults. Qulipta is commercialized in the United States and Canada and is approved in the European Union under the brand name Aquipta.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|:--------------------|:-----------|:-----------|:----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.7271 | 0.7271 | 0.72 | 0.7086 | 0.6757 |
| cosine_accuracy@3 | 0.85 | 0.8457 | 0.8457 | 0.8286 | 0.8186 |
| cosine_accuracy@5 | 0.8786 | 0.88 | 0.8714 | 0.8729 | 0.8457 |
| cosine_accuracy@10 | 0.9114 | 0.9129 | 0.9057 | 0.8986 | 0.8843 |
| cosine_precision@1 | 0.7271 | 0.7271 | 0.72 | 0.7086 | 0.6757 |
| cosine_precision@3 | 0.2833 | 0.2819 | 0.2819 | 0.2762 | 0.2729 |
| cosine_precision@5 | 0.1757 | 0.176 | 0.1743 | 0.1746 | 0.1691 |
| cosine_precision@10 | 0.0911 | 0.0913 | 0.0906 | 0.0899 | 0.0884 |
| cosine_recall@1 | 0.7271 | 0.7271 | 0.72 | 0.7086 | 0.6757 |
| cosine_recall@3 | 0.85 | 0.8457 | 0.8457 | 0.8286 | 0.8186 |
| cosine_recall@5 | 0.8786 | 0.88 | 0.8714 | 0.8729 | 0.8457 |
| cosine_recall@10 | 0.9114 | 0.9129 | 0.9057 | 0.8986 | 0.8843 |
| **cosine_ndcg@10** | **0.8225** | **0.8224** | **0.816** | **0.8074** | **0.7862** |
| cosine_mrr@10 | 0.7937 | 0.7931 | 0.7869 | 0.7777 | 0.7543 |
| cosine_map@100 | 0.7974 | 0.7967 | 0.7907 | 0.7816 | 0.7588 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 6,300 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 20.8 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 45.24 tokens</li><li>max: 326 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>How many full-time employees did Microsoft report as of June 30, 2023?</code> | <code>As of June 30, 2023, we employed approximately 221,000 people on a full-time basis, 120,000 in the U.S. and 101,000 internationally.</code> |
| <code>What was the total amount CSC paid for Series G preferred stock repurchases in 2023?</code> | <code>In 2023, CSC repurchased 42,036 depositary shares representing interests in Series G preferred stock for a total amount of $42 million.</code> |
| <code>What does Note 13 in the Annual Report on Form 10-K discuss?</code> | <code>For a discussion of legal and other proceedings in which we are involved, see Note 13 - Commitments and Contingencies in the Notes to Consolidated Financial Statements.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:---------:|:-------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.2030 | 10 | 9.3166 | - | - | - | - | - |
| 0.4061 | 20 | 3.7163 | - | - | - | - | - |
| 0.6091 | 30 | 2.8216 | - | - | - | - | - |
| 0.8122 | 40 | 1.9313 | - | - | - | - | - |
| 1.0 | 50 | 1.5613 | 0.8230 | 0.8237 | 0.8153 | 0.8036 | 0.7771 |
| 1.2030 | 60 | 1.0926 | - | - | - | - | - |
| 1.4061 | 70 | 0.3367 | - | - | - | - | - |
| 1.6091 | 80 | 0.3958 | - | - | - | - | - |
| 1.8122 | 90 | 0.6527 | - | - | - | - | - |
| 2.0 | 100 | 0.4483 | 0.8202 | 0.8209 | 0.8118 | 0.8033 | 0.7792 |
| 2.2030 | 110 | 0.1823 | - | - | - | - | - |
| 2.4061 | 120 | 0.0494 | - | - | - | - | - |
| 2.6091 | 130 | 0.1204 | - | - | - | - | - |
| 2.8122 | 140 | 0.2021 | - | - | - | - | - |
| 3.0 | 150 | 0.2088 | 0.8211 | 0.8213 | 0.8148 | 0.8064 | 0.7825 |
| 3.2030 | 160 | 0.062 | - | - | - | - | - |
| 3.4061 | 170 | 0.022 | - | - | - | - | - |
| 3.6091 | 180 | 0.0654 | - | - | - | - | - |
| 3.8122 | 190 | 0.1481 | - | - | - | - | - |
| **3.934** | **196** | **-** | **0.8225** | **0.8224** | **0.816** | **0.8074** | **0.7862** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.16
- Sentence Transformers: 3.3.1
- Transformers: 4.48.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# E5 unsupervised Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/e5-large-unsupervised](https://huggingface.co/intfloat/e5-large-unsupervised) on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/e5-large-unsupervised](https://huggingface.co/intfloat/e5-large-unsupervised) <!-- at revision 15af9288f69a6291f37bfb89b47e71abc747b206 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("schawla2/e5-unsupervised-financial-matryoshka")
# Run inference
sentences = [
'What was the total cost and expenses reported by Berkshire Hathaway for the year ended December 31, 2023?',
'Total costs and expenses | | 321,144 | | | 266,484 | | | 243,752',
'Qulipta (atogepant) is a calcitonin gene-related peptide receptor antagonist indicated for the preventive treatment of episodic and chronic migraine in adults. Qulipta is commercialized in the United States and Canada and is approved in the European Union under the brand name Aquipta.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|:--------------------|:-----------|:-----------|:----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.7271 | 0.7271 | 0.72 | 0.7086 | 0.6757 |
| cosine_accuracy@3 | 0.85 | 0.8457 | 0.8457 | 0.8286 | 0.8186 |
| cosine_accuracy@5 | 0.8786 | 0.88 | 0.8714 | 0.8729 | 0.8457 |
| cosine_accuracy@10 | 0.9114 | 0.9129 | 0.9057 | 0.8986 | 0.8843 |
| cosine_precision@1 | 0.7271 | 0.7271 | 0.72 | 0.7086 | 0.6757 |
| cosine_precision@3 | 0.2833 | 0.2819 | 0.2819 | 0.2762 | 0.2729 |
| cosine_precision@5 | 0.1757 | 0.176 | 0.1743 | 0.1746 | 0.1691 |
| cosine_precision@10 | 0.0911 | 0.0913 | 0.0906 | 0.0899 | 0.0884 |
| cosine_recall@1 | 0.7271 | 0.7271 | 0.72 | 0.7086 | 0.6757 |
| cosine_recall@3 | 0.85 | 0.8457 | 0.8457 | 0.8286 | 0.8186 |
| cosine_recall@5 | 0.8786 | 0.88 | 0.8714 | 0.8729 | 0.8457 |
| cosine_recall@10 | 0.9114 | 0.9129 | 0.9057 | 0.8986 | 0.8843 |
| **cosine_ndcg@10** | **0.8225** | **0.8224** | **0.816** | **0.8074** | **0.7862** |
| cosine_mrr@10 | 0.7937 | 0.7931 | 0.7869 | 0.7777 | 0.7543 |
| cosine_map@100 | 0.7974 | 0.7967 | 0.7907 | 0.7816 | 0.7588 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 6,300 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 20.8 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 45.24 tokens</li><li>max: 326 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>How many full-time employees did Microsoft report as of June 30, 2023?</code> | <code>As of June 30, 2023, we employed approximately 221,000 people on a full-time basis, 120,000 in the U.S. and 101,000 internationally.</code> |
| <code>What was the total amount CSC paid for Series G preferred stock repurchases in 2023?</code> | <code>In 2023, CSC repurchased 42,036 depositary shares representing interests in Series G preferred stock for a total amount of $42 million.</code> |
| <code>What does Note 13 in the Annual Report on Form 10-K discuss?</code> | <code>For a discussion of legal and other proceedings in which we are involved, see Note 13 - Commitments and Contingencies in the Notes to Consolidated Financial Statements.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:---------:|:-------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.2030 | 10 | 9.3166 | - | - | - | - | - |
| 0.4061 | 20 | 3.7163 | - | - | - | - | - |
| 0.6091 | 30 | 2.8216 | - | - | - | - | - |
| 0.8122 | 40 | 1.9313 | - | - | - | - | - |
| 1.0 | 50 | 1.5613 | 0.8230 | 0.8237 | 0.8153 | 0.8036 | 0.7771 |
| 1.2030 | 60 | 1.0926 | - | - | - | - | - |
| 1.4061 | 70 | 0.3367 | - | - | - | - | - |
| 1.6091 | 80 | 0.3958 | - | - | - | - | - |
| 1.8122 | 90 | 0.6527 | - | - | - | - | - |
| 2.0 | 100 | 0.4483 | 0.8202 | 0.8209 | 0.8118 | 0.8033 | 0.7792 |
| 2.2030 | 110 | 0.1823 | - | - | - | - | - |
| 2.4061 | 120 | 0.0494 | - | - | - | - | - |
| 2.6091 | 130 | 0.1204 | - | - | - | - | - |
| 2.8122 | 140 | 0.2021 | - | - | - | - | - |
| 3.0 | 150 | 0.2088 | 0.8211 | 0.8213 | 0.8148 | 0.8064 | 0.7825 |
| 3.2030 | 160 | 0.062 | - | - | - | - | - |
| 3.4061 | 170 | 0.022 | - | - | - | - | - |
| 3.6091 | 180 | 0.0654 | - | - | - | - | - |
| 3.8122 | 190 | 0.1481 | - | - | - | - | - |
| **3.934** | **196** | **-** | **0.8225** | **0.8224** | **0.816** | **0.8074** | **0.7862** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.16
- Sentence Transformers: 3.3.1
- Transformers: 4.48.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "intfloat/e5-large-unsupervised", "language": ["en"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:6300", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "What are the key components of the transparency provisions included in the Consolidated Appropriations Act of 2021 regarding healthcare?", "sentences": ["The report includes information on legal proceedings under 'Note 13 — Commitments and Contingencies — Litigation and Other Legal Matters' which is a part of the consolidated financial statements", "The Consolidated Appropriations Act of 2021 was signed into law in December 2020 and contains further transparency provisions requiring group health plans and health insurance issuers to report certain prescription drug costs, overall spending on health services and prescription drugs, and information about premiums and the impact of rebates and other remuneration on premiums and out-of-pocket costs to the Tri-Departments.", "In 2023, the company recorded other operating charges of $1,951 million."]}, {"source_sentence": "What technology does the Tax Advisor use and for what purpose in Intuit's offerings?", "sentences": ["In 2023, Goldman Sachs' investments in funds at NAV primarily included firm-sponsored private equity, credit, real estate, and hedge funds. These funds are involved in various types of investments such as leveraged buyouts, recapitalizations, growth investments, and distressed investments for private equity, while credit funds are focused on providing private high-yield capital for leveraged and management buyout transactions. Real estate funds invest globally in real estate assets, and hedge funds adopt a fundamental bottom-up investment approach.", "Using AI technologies, our Tax Advisor offering leverages information generated from our ProConnect Tax Online and Lacerte offerings to enable year-round tax planning services and communicate tax savings strategies to clients.", "'Note 13 — Commitments and Contingencies' provides details about litigation and other legal matters in an Annual Report on Form 10-K."]}, {"source_sentence": "What was the net revenue for the Data Center segment in 2023?", "sentences": ["Data Center net revenue of $6.5 billion in 2023 increased by 7%, compared to net revenue of $6.0 billion in 2022.", "Under its Class 2 insurance license, Caterpillar Insurance Co. Ltd. insures its parent and affiliates for general liability, property, auto liability and cargo. It also provides reinsurance to CaterThe pillar Insurance Company under a quota share reinsurance agreement for its contractual liability and contractors’ equipment programs in the United States.", "Schwab’s funding of these remaining commitments is dependent upon the occurrence of certain conditions, and Schwab expects to pay substantially all of these commitments between 2024 and 2027."]}, {"source_sentence": "What are the three principles of liquidity risk management at Goldman Sachs?", "sentences": ["The Company determines if an arrangement is a lease at inception and classifies its leases at commencement. Operating leases are included in operating lease right-of-use (\"ROU\") assets and current and noncurrent operating lease liabilities on the Company’s consolidated balance sheets.", "Garmin Ltd. reported a net income of $1,289,636 for the fiscal year ended December 30, 2023.", "Goldman Sachs manages liquidity risk based on three principles: 1) hold sufficient excess liquidity in the form of GCLA to cover outflows during a stressed period, 2) maintain appropriate Asset-Liability Management, and 3) maintain a viable Contingency Funding Plan."]}, {"source_sentence": "What was the total cost and expenses reported by Berkshire Hathaway for the year ended December 31, 2023?", "sentences": ["Total costs and expenses | | 321,144 | | | 266,484 | | | 243,752", "Qulipta (atogepant) is a calcitonin gene-related peptide receptor antagonist indicated for the preventive treatment of episodic and chronic migraine in adults. Qulipta is commercialized in the United States and Canada and is approved in the European Union under the brand name Aquipta.", "Item 3 'Legal Proceedings' is integrated by reference to other parts including Note 22 — 'Environmental and legal matters' and Part II, Item 8."]}], "model-index": [{"name": "E5 unsupervised Financial Matryoshka", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 768", "type": "dim_768"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.7271428571428571, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.85, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8785714285714286, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9114285714285715, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.7271428571428571, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.2833333333333333, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.17571428571428568, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09114285714285714, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.7271428571428571, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.85, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8785714285714286, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9114285714285715, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.822517236613446, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.7936921768707483, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7973883589026711, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 512", "type": "dim_512"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.7271428571428571, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.8457142857142858, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.88, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9128571428571428, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.7271428571428571, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.28190476190476194, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.176, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09128571428571429, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.7271428571428571, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.8457142857142858, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.88, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9128571428571428, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.8223709830528422, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.793145691609977, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7966990460475021, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 256", "type": "dim_256"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.72, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.8457142857142858, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8714285714285714, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9057142857142857, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.72, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.28190476190476194, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.17428571428571424, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09057142857142855, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.72, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.8457142857142858, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8714285714285714, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9057142857142857, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.8159991941699124, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.7869370748299319, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7906967878713818, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 128", "type": "dim_128"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.7085714285714285, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.8285714285714286, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8728571428571429, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.8985714285714286, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.7085714285714285, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.2761904761904762, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.17457142857142854, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.08985714285714284, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.7085714285714285, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.8285714285714286, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8728571428571429, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.8985714285714286, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.8073517667504667, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.7777108843537414, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7815591417851651, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 64", "type": "dim_64"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.6757142857142857, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.8185714285714286, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8457142857142858, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.8842857142857142, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.6757142857142857, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.27285714285714285, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.16914285714285712, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.08842857142857141, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.6757142857142857, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.8185714285714286, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8457142857142858, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.8842857142857142, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7861731335824387, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.7542681405895693, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7588497811523153, "name": "Cosine Map@100"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,532 |
DongHyoungLee/distilbert-base-uncased-finetuned-cola
|
DongHyoungLee
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2021-10-16T11:30:42+00:00
| 113 | 0 |
---
datasets:
- glue
license: apache-2.0
metrics:
- matthews_correlation
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
args: cola
metrics:
- type: matthews_correlation
value: 0.535587402888147
name: Matthews Correlation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7335
- Matthews Correlation: 0.5356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5309 | 1.0 | 535 | 0.5070 | 0.4239 |
| 0.3568 | 2.0 | 1070 | 0.5132 | 0.4913 |
| 0.24 | 3.0 | 1605 | 0.6081 | 0.4990 |
| 0.1781 | 4.0 | 2140 | 0.7335 | 0.5356 |
| 0.1243 | 5.0 | 2675 | 0.8705 | 0.5242 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7335
- Matthews Correlation: 0.5356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5309 | 1.0 | 535 | 0.5070 | 0.4239 |
| 0.3568 | 2.0 | 1070 | 0.5132 | 0.4913 |
| 0.24 | 3.0 | 1605 | 0.6081 | 0.4990 |
| 0.1781 | 4.0 | 2140 | 0.7335 | 0.5356 |
| 0.1243 | 5.0 | 2675 | 0.8705 | 0.5242 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["matthews_correlation"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.535587402888147, "name": "Matthews Correlation"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,533 |
haonan-li/bactrian-ar-llama-7b-lora
|
haonan-li
| null |
[
"arxiv:2305.15011",
"license:mit",
"region:us"
] | 2023-06-02T06:36:41Z |
2023-06-02T06:36:56+00:00
| 0 | 0 |
---
license: mit
---
This repo contains a low-rank adapter (LoRA) for LLaMA-7b
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Arabic.
### Dataset Creation
1. English Instructions: The English instuctions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca), and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using Google Translation API (conducted on April 2023).
3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted on April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 512
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: q_proj, v_proj,
That is:
```
python finetune.py \
--base_model='decapoda-research/llama-7b-hf' \
--num_epochs=8 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-ar-7b-lora' \
--lora_target_modules='[q_proj,v_proj]' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| null |
Non_BioNLP
|
This repo contains a low-rank adapter (LoRA) for LLaMA-7b
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Arabic.
### Dataset Creation
1. English Instructions: The English instuctions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca), and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using Google Translation API (conducted on April 2023).
3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted on April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 512
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: q_proj, v_proj,
That is:
```
python finetune.py \
--base_model='decapoda-research/llama-7b-hf' \
--num_epochs=8 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-ar-7b-lora' \
--lora_target_modules='[q_proj,v_proj]' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"license": "mit"}
|
task
|
[
"TRANSLATION"
] | 42,534 |
facebook/dpr-ctx_encoder-multiset-base
|
facebook
| null |
[
"transformers",
"pytorch",
"tf",
"dpr",
"en",
"dataset:nq_open",
"arxiv:2004.04906",
"arxiv:1702.08734",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"region:us"
] | 2022-03-02T23:29:05Z |
2022-12-21T15:19:57+00:00
| 21,008 | 5 |
---
datasets:
- nq_open
language: en
license: cc-by-nc-4.0
tags:
- dpr
inference: false
---
# `dpr-ctx_encoder-multiset-base`
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation-results)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
## Model Details
**Model Description:** [Dense Passage Retrieval (DPR)](https://github.com/facebookresearch/DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. `dpr-ctx_encoder-multiset-base` is the context encoder trained using the [Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open), [TriviaQA](https://huggingface.co/datasets/trivia_qa), [WebQuestions (WQ)](https://huggingface.co/datasets/web_questions), and [CuratedTREC (TREC)](https://huggingface.co/datasets/trec).
- **Developed by:** See [GitHub repo](https://github.com/facebookresearch/DPR) for model developers
- **Model Type:** BERT-based encoder
- **Language(s):** [CC-BY-NC-4.0](https://github.com/facebookresearch/DPR/blob/main/LICENSE), also see [Code of Conduct](https://github.com/facebookresearch/DPR/blob/main/CODE_OF_CONDUCT.md)
- **License:** English
- **Related Models:**
- [`dpr-question_encoder-multiset-base`](https://huggingface.co/facebook/dpr-question_encoder-multiset-base)
- [`dpr-reader-multiset-base`](https://huggingface.co/facebook/dpr-reader-multiset-base)
- [`dpr-question-encoder-single-nq-base`](https://huggingface.co/facebook/dpr-question_encoder-single-nq-base)
- [`dpr-reader-single-nq-base`](https://huggingface.co/facebook/dpr-reader-single-nq-base)
- [`dpr-ctx_encoder-single-nq-base`](https://huggingface.co/facebook/dpr-ctx_encoder-single-nq-base)
- **Resources for more information:**
- [Research Paper](https://arxiv.org/abs/2004.04906)
- [GitHub Repo](https://github.com/facebookresearch/DPR)
- [Hugging Face DPR docs](https://huggingface.co/docs/transformers/main/en/model_doc/dpr)
- [BERT Base Uncased Model Card](https://huggingface.co/bert-base-uncased)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer
tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-multiset-base")
model = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-multiset-base")
input_ids = tokenizer("Hello, is my dog cute ?", return_tensors="pt")["input_ids"]
embeddings = model(input_ids).pooler_output
```
## Uses
#### Direct Use
`dpr-ctx_encoder-multiset-base`, [`dpr-question_encoder-multiset-base`](https://huggingface.co/facebook/dpr-question_encoder-multiset-base), and [`dpr-reader-multiset-base`](https://huggingface.co/facebook/dpr-reader-multiset-base) can be used for the task of open-domain question answering.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propogate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Training
#### Training Data
This model was trained using the following datasets:
- **[Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open)** ([Lee et al., 2019](https://aclanthology.org/P19-1612/); [Kwiatkowski et al., 2019](https://aclanthology.org/Q19-1026/))
- **[TriviaQA](https://huggingface.co/datasets/trivia_qa)** ([Joshi et al., 2017](https://aclanthology.org/P17-1147/))
- **[WebQuestions (WQ)](https://huggingface.co/datasets/web_questions)** ([Berant et al., 2013](https://aclanthology.org/D13-1160/))
- **[CuratedTREC (TREC)](https://huggingface.co/datasets/trec)** ([Baudiš & Šedivý, 2015](https://www.aminer.cn/pub/599c7953601a182cd263079b/reading-wikipedia-to-answer-open-domain-questions))
#### Training Procedure
The training procedure is described in the [associated paper](https://arxiv.org/pdf/2004.04906.pdf):
> Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time.
> Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector.
The authors report that for encoders, they used two independent BERT ([Devlin et al., 2019](https://aclanthology.org/N19-1423/)) networks (base, un-cased) and use FAISS ([Johnson et al., 2017](https://arxiv.org/abs/1702.08734)) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives.
## Evaluation
The following evaluation information is extracted from the [associated paper](https://arxiv.org/pdf/2004.04906.pdf).
#### Testing Data, Factors and Metrics
The model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were [NQ](https://huggingface.co/datasets/nq_open), [TriviaQA](https://huggingface.co/datasets/trivia_qa), [WebQuestions (WQ)](https://huggingface.co/datasets/web_questions), [CuratedTREC (TREC)](https://huggingface.co/datasets/trec), and [SQuAD v1.1](https://huggingface.co/datasets/squad).
#### Results
| | Top 20 | | | | | Top 100| | | | |
|:----:|:------:|:---------:|:--:|:----:|:-----:|:------:|:---------:|:--:|:----:|:-----:|
| | NQ | TriviaQA | WQ | TREC | SQuAD | NQ | TriviaQA | WQ | TREC | SQuAD |
| | 79.4 | 78.8 |75.0| 89.1 | 51.6 | 86.0 | 84.7 |82.9| 93.9 | 67.6 |
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type and based on the [associated paper](https://arxiv.org/abs/2004.04906).
- **Hardware Type:** 8 32GB GPUs
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://arxiv.org/abs/2004.04906) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@inproceedings{karpukhin-etal-2020-dense,
title = "Dense Passage Retrieval for Open-Domain Question Answering",
author = "Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.550",
doi = "10.18653/v1/2020.emnlp-main.550",
pages = "6769--6781",
}
```
## Model Card Authors
This model card was written by the team at Hugging Face.
| null |
Non_BioNLP
|
# `dpr-ctx_encoder-multiset-base`
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation-results)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
## Model Details
**Model Description:** [Dense Passage Retrieval (DPR)](https://github.com/facebookresearch/DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. `dpr-ctx_encoder-multiset-base` is the context encoder trained using the [Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open), [TriviaQA](https://huggingface.co/datasets/trivia_qa), [WebQuestions (WQ)](https://huggingface.co/datasets/web_questions), and [CuratedTREC (TREC)](https://huggingface.co/datasets/trec).
- **Developed by:** See [GitHub repo](https://github.com/facebookresearch/DPR) for model developers
- **Model Type:** BERT-based encoder
- **Language(s):** [CC-BY-NC-4.0](https://github.com/facebookresearch/DPR/blob/main/LICENSE), also see [Code of Conduct](https://github.com/facebookresearch/DPR/blob/main/CODE_OF_CONDUCT.md)
- **License:** English
- **Related Models:**
- [`dpr-question_encoder-multiset-base`](https://huggingface.co/facebook/dpr-question_encoder-multiset-base)
- [`dpr-reader-multiset-base`](https://huggingface.co/facebook/dpr-reader-multiset-base)
- [`dpr-question-encoder-single-nq-base`](https://huggingface.co/facebook/dpr-question_encoder-single-nq-base)
- [`dpr-reader-single-nq-base`](https://huggingface.co/facebook/dpr-reader-single-nq-base)
- [`dpr-ctx_encoder-single-nq-base`](https://huggingface.co/facebook/dpr-ctx_encoder-single-nq-base)
- **Resources for more information:**
- [Research Paper](https://arxiv.org/abs/2004.04906)
- [GitHub Repo](https://github.com/facebookresearch/DPR)
- [Hugging Face DPR docs](https://huggingface.co/docs/transformers/main/en/model_doc/dpr)
- [BERT Base Uncased Model Card](https://huggingface.co/bert-base-uncased)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer
tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-multiset-base")
model = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-multiset-base")
input_ids = tokenizer("Hello, is my dog cute ?", return_tensors="pt")["input_ids"]
embeddings = model(input_ids).pooler_output
```
## Uses
#### Direct Use
`dpr-ctx_encoder-multiset-base`, [`dpr-question_encoder-multiset-base`](https://huggingface.co/facebook/dpr-question_encoder-multiset-base), and [`dpr-reader-multiset-base`](https://huggingface.co/facebook/dpr-reader-multiset-base) can be used for the task of open-domain question answering.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propogate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Training
#### Training Data
This model was trained using the following datasets:
- **[Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open)** ([Lee et al., 2019](https://aclanthology.org/P19-1612/); [Kwiatkowski et al., 2019](https://aclanthology.org/Q19-1026/))
- **[TriviaQA](https://huggingface.co/datasets/trivia_qa)** ([Joshi et al., 2017](https://aclanthology.org/P17-1147/))
- **[WebQuestions (WQ)](https://huggingface.co/datasets/web_questions)** ([Berant et al., 2013](https://aclanthology.org/D13-1160/))
- **[CuratedTREC (TREC)](https://huggingface.co/datasets/trec)** ([Baudiš & Šedivý, 2015](https://www.aminer.cn/pub/599c7953601a182cd263079b/reading-wikipedia-to-answer-open-domain-questions))
#### Training Procedure
The training procedure is described in the [associated paper](https://arxiv.org/pdf/2004.04906.pdf):
> Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time.
> Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector.
The authors report that for encoders, they used two independent BERT ([Devlin et al., 2019](https://aclanthology.org/N19-1423/)) networks (base, un-cased) and use FAISS ([Johnson et al., 2017](https://arxiv.org/abs/1702.08734)) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives.
## Evaluation
The following evaluation information is extracted from the [associated paper](https://arxiv.org/pdf/2004.04906.pdf).
#### Testing Data, Factors and Metrics
The model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were [NQ](https://huggingface.co/datasets/nq_open), [TriviaQA](https://huggingface.co/datasets/trivia_qa), [WebQuestions (WQ)](https://huggingface.co/datasets/web_questions), [CuratedTREC (TREC)](https://huggingface.co/datasets/trec), and [SQuAD v1.1](https://huggingface.co/datasets/squad).
#### Results
| | Top 20 | | | | | Top 100| | | | |
|:----:|:------:|:---------:|:--:|:----:|:-----:|:------:|:---------:|:--:|:----:|:-----:|
| | NQ | TriviaQA | WQ | TREC | SQuAD | NQ | TriviaQA | WQ | TREC | SQuAD |
| | 79.4 | 78.8 |75.0| 89.1 | 51.6 | 86.0 | 84.7 |82.9| 93.9 | 67.6 |
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type and based on the [associated paper](https://arxiv.org/abs/2004.04906).
- **Hardware Type:** 8 32GB GPUs
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://arxiv.org/abs/2004.04906) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@inproceedings{karpukhin-etal-2020-dense,
title = "Dense Passage Retrieval for Open-Domain Question Answering",
author = "Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.550",
doi = "10.18653/v1/2020.emnlp-main.550",
pages = "6769--6781",
}
```
## Model Card Authors
This model card was written by the team at Hugging Face.
|
{"datasets": ["nq_open"], "language": "en", "license": "cc-by-nc-4.0", "tags": ["dpr"], "inference": false}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,535 |
tomaarsen/bert-base-uncased-stsb-tsdae
|
tomaarsen
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1000000",
"loss:DenoisingAutoEncoderLoss",
"dataset:princeton-nlp/datasets-for-simcse",
"arxiv:1908.10084",
"arxiv:2104.06979",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-01-16T19:24:29Z |
2025-01-16T19:24:57+00:00
| 13 | 0 |
---
base_model: google-bert/bert-base-uncased
datasets:
- princeton-nlp/datasets-for-simcse
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1000000
- loss:DenoisingAutoEncoderLoss
widget:
- source_sentence: He wound up homeless in the Mission District, playing for change
in the streets.
sentences:
- He wound up homeless, playing in streets
- It line-up of professional footballers,, firefighters and survivors.
- A (Dakota) belonging to the Dutch Air crash-landed near Beswick (Beswick Creek
now Barunga?
- source_sentence: The division remained near Arkhangelsk until the beginning of August,
when it was shipped across the White Sea to Murmansk.
sentences:
- The division remained near Arkhangelsk until the beginning of August, when it
was shipped across White Sea to Murmansk.
- The building is and.
- Maxim Triesman born October) is politician banker trade union leader.
- source_sentence: '"Leper," the last song on the album, was left as an instrumental
as Jourgensen had left the studio earlier than scheduled and did not care to write
any lyrics.'
sentences:
- There produced the viral host cells processes, more suitable environment for viral
replication transcription.
- As a the to
- Leper, the song on the album was left as an as Jourgensen had left the studio
scheduled and did care to any lyrics
- source_sentence: Prince and princess have given Gerda her their golden coach so
she can continue her search for Kay.
sentences:
- and princess given Gerda their golden coach so she can her search for Kay.
- handled the cinematography
- University Hoekstra was Professor of and Department of Multidisciplinary Water.
- source_sentence: While the early models stayed close to their original form, eight
subsequent generations varied substantially in size and styling.
sentences:
- While the stayed close their, eight generations varied substantially in size and
- Their influence, his's own tradition, his special organization all combined to
divert the young into a political career
- “ U ” cross of the river are a recent
co2_eq_emissions:
emissions: 556.5173349579181
energy_consumed: 1.4317326253991955
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 4.403
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: SentenceTransformer based on google-bert/bert-base-uncased
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev
type: sts-dev
metrics:
- type: pearson_cosine
value: 0.6732163313155011
name: Pearson Cosine
- type: spearman_cosine
value: 0.6765812652563955
name: Spearman Cosine
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test
type: sts-test
metrics:
- type: pearson_cosine
value: 0.6424591318281525
name: Pearson Cosine
- type: spearman_cosine
value: 0.6322331484751982
name: Spearman Cosine
---
# SentenceTransformer based on google-bert/bert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the [datasets-for-simcse](https://huggingface.co/datasets/princeton-nlp/datasets-for-simcse) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) <!-- at revision 86b5e0934494bd15c9632b12f734a8a67f723594 -->
- **Maximum Sequence Length:** 75 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [datasets-for-simcse](https://huggingface.co/datasets/princeton-nlp/datasets-for-simcse)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/bert-base-uncased-stsb-tsdae")
# Run inference
sentences = [
'While the early models stayed close to their original form, eight subsequent generations varied substantially in size and styling.',
'While the stayed close their, eight generations varied substantially in size and',
'“ U ” cross of the river are a recent',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Datasets: `sts-dev` and `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | sts-dev | sts-test |
|:--------------------|:-----------|:-----------|
| pearson_cosine | 0.6732 | 0.6425 |
| **spearman_cosine** | **0.6766** | **0.6322** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### datasets-for-simcse
* Dataset: [datasets-for-simcse](https://huggingface.co/datasets/princeton-nlp/datasets-for-simcse) at [e145e8b](https://huggingface.co/datasets/princeton-nlp/datasets-for-simcse/tree/e145e8bb659b2aa2669f32ef79cb4cdef6c58fef)
* Size: 1,000,000 training samples
* Columns: <code>text</code> and <code>noisy</code>
* Approximate statistics based on the first 1000 samples:
| | text | noisy |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 27.96 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 17.68 tokens</li><li>max: 75 tokens</li></ul> |
* Samples:
| text | noisy |
|:---------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| <code>White was born in Iver, England.</code> | <code>White was born in Iver,</code> |
| <code>The common mangrove plants are "Rhizophora mucronata", "Sonneratia caseolaris", "Avicennia" spp., and "Aegiceras corniculatum".</code> | <code>plants are Rhizophora mucronata" "Sonneratia, spp.,".</code> |
| <code>H3K9ac and H3K14ac have been shown to be part of the active promoter state.</code> | <code>H3K9ac been part of active promoter state.</code> |
* Loss: [<code>DenoisingAutoEncoderLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#denoisingautoencoderloss)
### Evaluation Dataset
#### datasets-for-simcse
* Dataset: [datasets-for-simcse](https://huggingface.co/datasets/princeton-nlp/datasets-for-simcse) at [e145e8b](https://huggingface.co/datasets/princeton-nlp/datasets-for-simcse/tree/e145e8bb659b2aa2669f32ef79cb4cdef6c58fef)
* Size: 1,000,000 evaluation samples
* Columns: <code>text</code> and <code>noisy</code>
* Approximate statistics based on the first 1000 samples:
| | text | noisy |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 28.12 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 17.79 tokens</li><li>max: 66 tokens</li></ul> |
* Samples:
| text | noisy |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Philippe Hervé (born 16 April 1959) is a French water polo player.</code> | <code>Philippe Hervé born April 1959 is French</code> |
| <code>lies at the very edge of Scottish offshore waters, close to the maritime boundary with Norway.</code> | <code>the edge Scottish offshore waters close to maritime boundary with Norway</code> |
| <code>The place is an exceptional example of the forced migration of convicts (Vinegar Hill rebels) and the development associated with punishment and reform, particularly convict labour and the associated coal mines.</code> | <code>The is an example of forced migration of convicts (Vinegar rebels and the development punishment and reform, particularly convict and the associated coal.</code> |
* Loss: [<code>DenoisingAutoEncoderLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#denoisingautoencoderloss)
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `learning_rate`: 3e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
|:------:|:------:|:-------------:|:---------------:|:-----------------------:|:------------------------:|
| -1 | -1 | - | - | 0.3173 | - |
| 0.0081 | 1000 | 7.5472 | - | - | - |
| 0.0162 | 2000 | 6.0196 | - | - | - |
| 0.0242 | 3000 | 5.4872 | - | - | - |
| 0.0323 | 4000 | 5.1452 | - | - | - |
| 0.0404 | 5000 | 4.8099 | - | - | - |
| 0.0485 | 6000 | 4.5211 | - | - | - |
| 0.0566 | 7000 | 4.2967 | - | - | - |
| 0.0646 | 8000 | 4.1411 | - | - | - |
| 0.0727 | 9000 | 4.031 | - | - | - |
| 0.0808 | 10000 | 3.9636 | 3.8297 | 0.7237 | - |
| 0.0889 | 11000 | 3.9046 | - | - | - |
| 0.0970 | 12000 | 3.8138 | - | - | - |
| 0.1051 | 13000 | 3.7859 | - | - | - |
| 0.1131 | 14000 | 3.7237 | - | - | - |
| 0.1212 | 15000 | 3.6881 | - | - | - |
| 0.1293 | 16000 | 3.6133 | - | - | - |
| 0.1374 | 17000 | 3.5777 | - | - | - |
| 0.1455 | 18000 | 3.5285 | - | - | - |
| 0.1535 | 19000 | 3.4974 | - | - | - |
| 0.1616 | 20000 | 3.4421 | 3.3523 | 0.6978 | - |
| 0.1697 | 21000 | 3.416 | - | - | - |
| 0.1778 | 22000 | 3.4143 | - | - | - |
| 0.1859 | 23000 | 3.3661 | - | - | - |
| 0.1939 | 24000 | 3.3408 | - | - | - |
| 0.2020 | 25000 | 3.3079 | - | - | - |
| 0.2101 | 26000 | 3.2873 | - | - | - |
| 0.2182 | 27000 | 3.2639 | - | - | - |
| 0.2263 | 28000 | 3.2323 | - | - | - |
| 0.2343 | 29000 | 3.2416 | - | - | - |
| 0.2424 | 30000 | 3.2117 | 3.1015 | 0.6895 | - |
| 0.2505 | 31000 | 3.1868 | - | - | - |
| 0.2586 | 32000 | 3.1576 | - | - | - |
| 0.2667 | 33000 | 3.1619 | - | - | - |
| 0.2747 | 34000 | 3.1445 | - | - | - |
| 0.2828 | 35000 | 3.1387 | - | - | - |
| 0.2909 | 36000 | 3.1159 | - | - | - |
| 0.2990 | 37000 | 3.09 | - | - | - |
| 0.3071 | 38000 | 3.0771 | - | - | - |
| 0.3152 | 39000 | 3.065 | - | - | - |
| 0.3232 | 40000 | 3.0589 | 2.9535 | 0.6885 | - |
| 0.3313 | 41000 | 3.0539 | - | - | - |
| 0.3394 | 42000 | 3.0211 | - | - | - |
| 0.3475 | 43000 | 3.0158 | - | - | - |
| 0.3556 | 44000 | 3.0172 | - | - | - |
| 0.3636 | 45000 | 2.9912 | - | - | - |
| 0.3717 | 46000 | 2.9776 | - | - | - |
| 0.3798 | 47000 | 2.9539 | - | - | - |
| 0.3879 | 48000 | 2.9753 | - | - | - |
| 0.3960 | 49000 | 2.9467 | - | - | - |
| 0.4040 | 50000 | 2.9429 | 2.8288 | 0.6830 | - |
| 0.4121 | 51000 | 2.9243 | - | - | - |
| 0.4202 | 52000 | 2.9273 | - | - | - |
| 0.4283 | 53000 | 2.9118 | - | - | - |
| 0.4364 | 54000 | 2.9068 | - | - | - |
| 0.4444 | 55000 | 2.8961 | - | - | - |
| 0.4525 | 56000 | 2.8621 | - | - | - |
| 0.4606 | 57000 | 2.8825 | - | - | - |
| 0.4687 | 58000 | 2.8466 | - | - | - |
| 0.4768 | 59000 | 2.868 | - | - | - |
| 0.4848 | 60000 | 2.8372 | 2.7335 | 0.6871 | - |
| 0.4929 | 61000 | 2.8322 | - | - | - |
| 0.5010 | 62000 | 2.8239 | - | - | - |
| 0.5091 | 63000 | 2.8148 | - | - | - |
| 0.5172 | 64000 | 2.8137 | - | - | - |
| 0.5253 | 65000 | 2.8043 | - | - | - |
| 0.5333 | 66000 | 2.7973 | - | - | - |
| 0.5414 | 67000 | 2.7739 | - | - | - |
| 0.5495 | 68000 | 2.7694 | - | - | - |
| 0.5576 | 69000 | 2.755 | - | - | - |
| 0.5657 | 70000 | 2.7846 | 2.6422 | 0.6773 | - |
| 0.5737 | 71000 | 2.7246 | - | - | - |
| 0.5818 | 72000 | 2.7438 | - | - | - |
| 0.5899 | 73000 | 2.7314 | - | - | - |
| 0.5980 | 74000 | 2.7213 | - | - | - |
| 0.6061 | 75000 | 2.7402 | - | - | - |
| 0.6141 | 76000 | 2.6955 | - | - | - |
| 0.6222 | 77000 | 2.7131 | - | - | - |
| 0.6303 | 78000 | 2.6951 | - | - | - |
| 0.6384 | 79000 | 2.6812 | - | - | - |
| 0.6465 | 80000 | 2.6844 | 2.5743 | 0.6827 | - |
| 0.6545 | 81000 | 2.665 | - | - | - |
| 0.6626 | 82000 | 2.6528 | - | - | - |
| 0.6707 | 83000 | 2.6819 | - | - | - |
| 0.6788 | 84000 | 2.6529 | - | - | - |
| 0.6869 | 85000 | 2.6665 | - | - | - |
| 0.6949 | 86000 | 2.6554 | - | - | - |
| 0.7030 | 87000 | 2.6299 | - | - | - |
| 0.7111 | 88000 | 2.659 | - | - | - |
| 0.7192 | 89000 | 2.632 | - | - | - |
| 0.7273 | 90000 | 2.6209 | 2.5051 | 0.6782 | - |
| 0.7354 | 91000 | 2.6023 | - | - | - |
| 0.7434 | 92000 | 2.6226 | - | - | - |
| 0.7515 | 93000 | 2.6057 | - | - | - |
| 0.7596 | 94000 | 2.601 | - | - | - |
| 0.7677 | 95000 | 2.5888 | - | - | - |
| 0.7758 | 96000 | 2.5811 | - | - | - |
| 0.7838 | 97000 | 2.565 | - | - | - |
| 0.7919 | 98000 | 2.5727 | - | - | - |
| 0.8 | 99000 | 2.5863 | - | - | - |
| 0.8081 | 100000 | 2.5534 | 2.4526 | 0.6799 | - |
| 0.8162 | 101000 | 2.5423 | - | - | - |
| 0.8242 | 102000 | 2.5655 | - | - | - |
| 0.8323 | 103000 | 2.5394 | - | - | - |
| 0.8404 | 104000 | 2.5217 | - | - | - |
| 0.8485 | 105000 | 2.5534 | - | - | - |
| 0.8566 | 106000 | 2.5264 | - | - | - |
| 0.8646 | 107000 | 2.5481 | - | - | - |
| 0.8727 | 108000 | 2.5508 | - | - | - |
| 0.8808 | 109000 | 2.5302 | - | - | - |
| 0.8889 | 110000 | 2.5223 | 2.4048 | 0.6771 | - |
| 0.8970 | 111000 | 2.5274 | - | - | - |
| 0.9051 | 112000 | 2.515 | - | - | - |
| 0.9131 | 113000 | 2.5088 | - | - | - |
| 0.9212 | 114000 | 2.5035 | - | - | - |
| 0.9293 | 115000 | 2.495 | - | - | - |
| 0.9374 | 116000 | 2.5066 | - | - | - |
| 0.9455 | 117000 | 2.4858 | - | - | - |
| 0.9535 | 118000 | 2.4803 | - | - | - |
| 0.9616 | 119000 | 2.506 | - | - | - |
| 0.9697 | 120000 | 2.4906 | 2.3738 | 0.6766 | - |
| 0.9778 | 121000 | 2.5027 | - | - | - |
| 0.9859 | 122000 | 2.4858 | - | - | - |
| 0.9939 | 123000 | 2.4928 | - | - | - |
| -1 | -1 | - | - | - | 0.6322 |
</details>
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 1.432 kWh
- **Carbon Emitted**: 0.557 kg of CO2
- **Hours Used**: 4.403 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.4.0.dev0
- Transformers: 4.48.0.dev0
- PyTorch: 2.5.0+cu121
- Accelerate: 0.35.0.dev0
- Datasets: 2.20.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### DenoisingAutoEncoderLoss
```bibtex
@inproceedings{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
pages = "671--688",
url = "https://arxiv.org/abs/2104.06979",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on google-bert/bert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the [datasets-for-simcse](https://huggingface.co/datasets/princeton-nlp/datasets-for-simcse) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) <!-- at revision 86b5e0934494bd15c9632b12f734a8a67f723594 -->
- **Maximum Sequence Length:** 75 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [datasets-for-simcse](https://huggingface.co/datasets/princeton-nlp/datasets-for-simcse)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/bert-base-uncased-stsb-tsdae")
# Run inference
sentences = [
'While the early models stayed close to their original form, eight subsequent generations varied substantially in size and styling.',
'While the stayed close their, eight generations varied substantially in size and',
'“ U ” cross of the river are a recent',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Datasets: `sts-dev` and `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | sts-dev | sts-test |
|:--------------------|:-----------|:-----------|
| pearson_cosine | 0.6732 | 0.6425 |
| **spearman_cosine** | **0.6766** | **0.6322** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### datasets-for-simcse
* Dataset: [datasets-for-simcse](https://huggingface.co/datasets/princeton-nlp/datasets-for-simcse) at [e145e8b](https://huggingface.co/datasets/princeton-nlp/datasets-for-simcse/tree/e145e8bb659b2aa2669f32ef79cb4cdef6c58fef)
* Size: 1,000,000 training samples
* Columns: <code>text</code> and <code>noisy</code>
* Approximate statistics based on the first 1000 samples:
| | text | noisy |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 27.96 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 17.68 tokens</li><li>max: 75 tokens</li></ul> |
* Samples:
| text | noisy |
|:---------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| <code>White was born in Iver, England.</code> | <code>White was born in Iver,</code> |
| <code>The common mangrove plants are "Rhizophora mucronata", "Sonneratia caseolaris", "Avicennia" spp., and "Aegiceras corniculatum".</code> | <code>plants are Rhizophora mucronata" "Sonneratia, spp.,".</code> |
| <code>H3K9ac and H3K14ac have been shown to be part of the active promoter state.</code> | <code>H3K9ac been part of active promoter state.</code> |
* Loss: [<code>DenoisingAutoEncoderLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#denoisingautoencoderloss)
### Evaluation Dataset
#### datasets-for-simcse
* Dataset: [datasets-for-simcse](https://huggingface.co/datasets/princeton-nlp/datasets-for-simcse) at [e145e8b](https://huggingface.co/datasets/princeton-nlp/datasets-for-simcse/tree/e145e8bb659b2aa2669f32ef79cb4cdef6c58fef)
* Size: 1,000,000 evaluation samples
* Columns: <code>text</code> and <code>noisy</code>
* Approximate statistics based on the first 1000 samples:
| | text | noisy |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 28.12 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 17.79 tokens</li><li>max: 66 tokens</li></ul> |
* Samples:
| text | noisy |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Philippe Hervé (born 16 April 1959) is a French water polo player.</code> | <code>Philippe Hervé born April 1959 is French</code> |
| <code>lies at the very edge of Scottish offshore waters, close to the maritime boundary with Norway.</code> | <code>the edge Scottish offshore waters close to maritime boundary with Norway</code> |
| <code>The place is an exceptional example of the forced migration of convicts (Vinegar Hill rebels) and the development associated with punishment and reform, particularly convict labour and the associated coal mines.</code> | <code>The is an example of forced migration of convicts (Vinegar rebels and the development punishment and reform, particularly convict and the associated coal.</code> |
* Loss: [<code>DenoisingAutoEncoderLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#denoisingautoencoderloss)
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `learning_rate`: 3e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
|:------:|:------:|:-------------:|:---------------:|:-----------------------:|:------------------------:|
| -1 | -1 | - | - | 0.3173 | - |
| 0.0081 | 1000 | 7.5472 | - | - | - |
| 0.0162 | 2000 | 6.0196 | - | - | - |
| 0.0242 | 3000 | 5.4872 | - | - | - |
| 0.0323 | 4000 | 5.1452 | - | - | - |
| 0.0404 | 5000 | 4.8099 | - | - | - |
| 0.0485 | 6000 | 4.5211 | - | - | - |
| 0.0566 | 7000 | 4.2967 | - | - | - |
| 0.0646 | 8000 | 4.1411 | - | - | - |
| 0.0727 | 9000 | 4.031 | - | - | - |
| 0.0808 | 10000 | 3.9636 | 3.8297 | 0.7237 | - |
| 0.0889 | 11000 | 3.9046 | - | - | - |
| 0.0970 | 12000 | 3.8138 | - | - | - |
| 0.1051 | 13000 | 3.7859 | - | - | - |
| 0.1131 | 14000 | 3.7237 | - | - | - |
| 0.1212 | 15000 | 3.6881 | - | - | - |
| 0.1293 | 16000 | 3.6133 | - | - | - |
| 0.1374 | 17000 | 3.5777 | - | - | - |
| 0.1455 | 18000 | 3.5285 | - | - | - |
| 0.1535 | 19000 | 3.4974 | - | - | - |
| 0.1616 | 20000 | 3.4421 | 3.3523 | 0.6978 | - |
| 0.1697 | 21000 | 3.416 | - | - | - |
| 0.1778 | 22000 | 3.4143 | - | - | - |
| 0.1859 | 23000 | 3.3661 | - | - | - |
| 0.1939 | 24000 | 3.3408 | - | - | - |
| 0.2020 | 25000 | 3.3079 | - | - | - |
| 0.2101 | 26000 | 3.2873 | - | - | - |
| 0.2182 | 27000 | 3.2639 | - | - | - |
| 0.2263 | 28000 | 3.2323 | - | - | - |
| 0.2343 | 29000 | 3.2416 | - | - | - |
| 0.2424 | 30000 | 3.2117 | 3.1015 | 0.6895 | - |
| 0.2505 | 31000 | 3.1868 | - | - | - |
| 0.2586 | 32000 | 3.1576 | - | - | - |
| 0.2667 | 33000 | 3.1619 | - | - | - |
| 0.2747 | 34000 | 3.1445 | - | - | - |
| 0.2828 | 35000 | 3.1387 | - | - | - |
| 0.2909 | 36000 | 3.1159 | - | - | - |
| 0.2990 | 37000 | 3.09 | - | - | - |
| 0.3071 | 38000 | 3.0771 | - | - | - |
| 0.3152 | 39000 | 3.065 | - | - | - |
| 0.3232 | 40000 | 3.0589 | 2.9535 | 0.6885 | - |
| 0.3313 | 41000 | 3.0539 | - | - | - |
| 0.3394 | 42000 | 3.0211 | - | - | - |
| 0.3475 | 43000 | 3.0158 | - | - | - |
| 0.3556 | 44000 | 3.0172 | - | - | - |
| 0.3636 | 45000 | 2.9912 | - | - | - |
| 0.3717 | 46000 | 2.9776 | - | - | - |
| 0.3798 | 47000 | 2.9539 | - | - | - |
| 0.3879 | 48000 | 2.9753 | - | - | - |
| 0.3960 | 49000 | 2.9467 | - | - | - |
| 0.4040 | 50000 | 2.9429 | 2.8288 | 0.6830 | - |
| 0.4121 | 51000 | 2.9243 | - | - | - |
| 0.4202 | 52000 | 2.9273 | - | - | - |
| 0.4283 | 53000 | 2.9118 | - | - | - |
| 0.4364 | 54000 | 2.9068 | - | - | - |
| 0.4444 | 55000 | 2.8961 | - | - | - |
| 0.4525 | 56000 | 2.8621 | - | - | - |
| 0.4606 | 57000 | 2.8825 | - | - | - |
| 0.4687 | 58000 | 2.8466 | - | - | - |
| 0.4768 | 59000 | 2.868 | - | - | - |
| 0.4848 | 60000 | 2.8372 | 2.7335 | 0.6871 | - |
| 0.4929 | 61000 | 2.8322 | - | - | - |
| 0.5010 | 62000 | 2.8239 | - | - | - |
| 0.5091 | 63000 | 2.8148 | - | - | - |
| 0.5172 | 64000 | 2.8137 | - | - | - |
| 0.5253 | 65000 | 2.8043 | - | - | - |
| 0.5333 | 66000 | 2.7973 | - | - | - |
| 0.5414 | 67000 | 2.7739 | - | - | - |
| 0.5495 | 68000 | 2.7694 | - | - | - |
| 0.5576 | 69000 | 2.755 | - | - | - |
| 0.5657 | 70000 | 2.7846 | 2.6422 | 0.6773 | - |
| 0.5737 | 71000 | 2.7246 | - | - | - |
| 0.5818 | 72000 | 2.7438 | - | - | - |
| 0.5899 | 73000 | 2.7314 | - | - | - |
| 0.5980 | 74000 | 2.7213 | - | - | - |
| 0.6061 | 75000 | 2.7402 | - | - | - |
| 0.6141 | 76000 | 2.6955 | - | - | - |
| 0.6222 | 77000 | 2.7131 | - | - | - |
| 0.6303 | 78000 | 2.6951 | - | - | - |
| 0.6384 | 79000 | 2.6812 | - | - | - |
| 0.6465 | 80000 | 2.6844 | 2.5743 | 0.6827 | - |
| 0.6545 | 81000 | 2.665 | - | - | - |
| 0.6626 | 82000 | 2.6528 | - | - | - |
| 0.6707 | 83000 | 2.6819 | - | - | - |
| 0.6788 | 84000 | 2.6529 | - | - | - |
| 0.6869 | 85000 | 2.6665 | - | - | - |
| 0.6949 | 86000 | 2.6554 | - | - | - |
| 0.7030 | 87000 | 2.6299 | - | - | - |
| 0.7111 | 88000 | 2.659 | - | - | - |
| 0.7192 | 89000 | 2.632 | - | - | - |
| 0.7273 | 90000 | 2.6209 | 2.5051 | 0.6782 | - |
| 0.7354 | 91000 | 2.6023 | - | - | - |
| 0.7434 | 92000 | 2.6226 | - | - | - |
| 0.7515 | 93000 | 2.6057 | - | - | - |
| 0.7596 | 94000 | 2.601 | - | - | - |
| 0.7677 | 95000 | 2.5888 | - | - | - |
| 0.7758 | 96000 | 2.5811 | - | - | - |
| 0.7838 | 97000 | 2.565 | - | - | - |
| 0.7919 | 98000 | 2.5727 | - | - | - |
| 0.8 | 99000 | 2.5863 | - | - | - |
| 0.8081 | 100000 | 2.5534 | 2.4526 | 0.6799 | - |
| 0.8162 | 101000 | 2.5423 | - | - | - |
| 0.8242 | 102000 | 2.5655 | - | - | - |
| 0.8323 | 103000 | 2.5394 | - | - | - |
| 0.8404 | 104000 | 2.5217 | - | - | - |
| 0.8485 | 105000 | 2.5534 | - | - | - |
| 0.8566 | 106000 | 2.5264 | - | - | - |
| 0.8646 | 107000 | 2.5481 | - | - | - |
| 0.8727 | 108000 | 2.5508 | - | - | - |
| 0.8808 | 109000 | 2.5302 | - | - | - |
| 0.8889 | 110000 | 2.5223 | 2.4048 | 0.6771 | - |
| 0.8970 | 111000 | 2.5274 | - | - | - |
| 0.9051 | 112000 | 2.515 | - | - | - |
| 0.9131 | 113000 | 2.5088 | - | - | - |
| 0.9212 | 114000 | 2.5035 | - | - | - |
| 0.9293 | 115000 | 2.495 | - | - | - |
| 0.9374 | 116000 | 2.5066 | - | - | - |
| 0.9455 | 117000 | 2.4858 | - | - | - |
| 0.9535 | 118000 | 2.4803 | - | - | - |
| 0.9616 | 119000 | 2.506 | - | - | - |
| 0.9697 | 120000 | 2.4906 | 2.3738 | 0.6766 | - |
| 0.9778 | 121000 | 2.5027 | - | - | - |
| 0.9859 | 122000 | 2.4858 | - | - | - |
| 0.9939 | 123000 | 2.4928 | - | - | - |
| -1 | -1 | - | - | - | 0.6322 |
</details>
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 1.432 kWh
- **Carbon Emitted**: 0.557 kg of CO2
- **Hours Used**: 4.403 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.4.0.dev0
- Transformers: 4.48.0.dev0
- PyTorch: 2.5.0+cu121
- Accelerate: 0.35.0.dev0
- Datasets: 2.20.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### DenoisingAutoEncoderLoss
```bibtex
@inproceedings{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
pages = "671--688",
url = "https://arxiv.org/abs/2104.06979",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "google-bert/bert-base-uncased", "datasets": ["princeton-nlp/datasets-for-simcse"], "library_name": "sentence-transformers", "metrics": ["pearson_cosine", "spearman_cosine"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1000000", "loss:DenoisingAutoEncoderLoss"], "widget": [{"source_sentence": "He wound up homeless in the Mission District, playing for change in the streets.", "sentences": ["He wound up homeless, playing in streets", "It line-up of professional footballers,, firefighters and survivors.", "A (Dakota) belonging to the Dutch Air crash-landed near Beswick (Beswick Creek now Barunga?"]}, {"source_sentence": "The division remained near Arkhangelsk until the beginning of August, when it was shipped across the White Sea to Murmansk.", "sentences": ["The division remained near Arkhangelsk until the beginning of August, when it was shipped across White Sea to Murmansk.", "The building is and.", "Maxim Triesman born October) is politician banker trade union leader."]}, {"source_sentence": "\"Leper,\" the last song on the album, was left as an instrumental as Jourgensen had left the studio earlier than scheduled and did not care to write any lyrics.", "sentences": ["There produced the viral host cells processes, more suitable environment for viral replication transcription.", "As a the to", "Leper, the song on the album was left as an as Jourgensen had left the studio scheduled and did care to any lyrics"]}, {"source_sentence": "Prince and princess have given Gerda her their golden coach so she can continue her search for Kay.", "sentences": ["and princess given Gerda their golden coach so she can her search for Kay.", "handled the cinematography", "University Hoekstra was Professor of and Department of Multidisciplinary Water."]}, {"source_sentence": "While the early models stayed close to their original form, eight subsequent generations varied substantially in size and styling.", "sentences": ["While the stayed close their, eight generations varied substantially in size and", "Their influence, his's own tradition, his special organization all combined to divert the young into a political career", "“ U ” cross of the river are a recent"]}], "co2_eq_emissions": {"emissions": 556.5173349579181, "energy_consumed": 1.4317326253991955, "source": "codecarbon", "training_type": "fine-tuning", "on_cloud": false, "cpu_model": "13th Gen Intel(R) Core(TM) i7-13700K", "ram_total_size": 31.777088165283203, "hours_used": 4.403, "hardware_used": "1 x NVIDIA GeForce RTX 3090"}, "model-index": [{"name": "SentenceTransformer based on google-bert/bert-base-uncased", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts dev", "type": "sts-dev"}, "metrics": [{"type": "pearson_cosine", "value": 0.6732163313155011, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.6765812652563955, "name": "Spearman Cosine"}]}, {"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts test", "type": "sts-test"}, "metrics": [{"type": "pearson_cosine", "value": 0.6424591318281525, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.6322331484751982, "name": "Spearman Cosine"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] | 42,536 |
neuralmagic/granite-3.1-2b-base-FP8-dynamic
|
neuralmagic
|
text-generation
|
[
"transformers",
"safetensors",
"granite",
"text-generation",
"fp8",
"vllm",
"en",
"base_model:ibm-granite/granite-3.1-2b-base",
"base_model:quantized:ibm-granite/granite-3.1-2b-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] | 2025-01-16T15:11:59Z |
2025-01-30T17:04:04+00:00
| 32 | 0 |
---
base_model: ibm-granite/granite-3.1-2b-base
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
tags:
- fp8
- vllm
---
# granite-3.1-2b-base-FP8-dynamic
## Model Overview
- **Model Architecture:** granite-3.1-2b-base
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** FP8
- **Activation quantization:** FP8
- **Release Date:** 1/8/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic
Quantized version of [ibm-granite/granite-3.1-2b-base](https://huggingface.co/ibm-granite/granite-3.1-2b-base).
It achieves an average score of 57.37 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 57.65.
### Model Optimizations
This model was obtained by quantizing the weights and activations of [ibm-granite/granite-3.1-2b-base](https://huggingface.co/ibm-granite/granite-3.1-2b-base) to FP8 data type, ready for inference with vLLM >= 0.5.2.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%. Only the weights and activations of the linear operators within transformers blocks are quantized.
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
max_model_len, tp_size = 4096, 1
model_name = "neuralmagic/granite-3.1-2b-base-FP8-dynamic"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True)
sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])
messages_list = [
[{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
]
prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]
outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)
generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
## Creation
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
<details>
<summary>Model Creation Code</summary>
```bash
python quantize.py --model_id ibm-granite/granite-3.1-2b-base --save_path "output_dir/"
```
```python
import argparse
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
import os
def main():
parser = argparse.ArgumentParser(description='Quantize a transformer model to FP8')
parser.add_argument('--model_id', type=str, required=True,
help='The model ID from HuggingFace (e.g., "meta-llama/Meta-Llama-3-2b-base")')
parser.add_argument('--save_path', type=str, default='.',
help='Custom path to save the quantized model. If not provided, will use model_name-FP8-dynamic')
args = parser.parse_args()
# Load model
model = AutoModelForCausalLM.from_pretrained(
args.model_id, device_map="auto", torch_dtype="auto", trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(args.model_id)
# Configure the quantization algorithm and scheme
recipe = QuantizationModifier(
targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
)
# Apply quantization
oneshot(model=model, recipe=recipe)
save_path = os.path.join(args.save_path, args.model_id.split("/")[1] + "-FP8-dynamic")
os.makedirs(save_path, exist_ok=True)
# Save to disk in compressed-tensors format
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")
if __name__ == "__main__":
main()
```
</details>
## Evaluation
The model was evaluated on OpenLLM Leaderboard [V1](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard), OpenLLM Leaderboard [V2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/) and on [HumanEval](https://github.com/neuralmagic/evalplus), using the following commands:
<details>
<summary>Evaluation Commands</summary>
OpenLLM Leaderboard V1:
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/granite-3.1-2b-base-FP8-dynamic",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
--tasks openllm \
--write_out \
--batch_size auto \
--output_path output_dir \
--show_config
```
#### HumanEval
##### Generation
```
python3 codegen/generate.py \
--model neuralmagic/granite-3.1-2b-base-FP8-dynamic \
--bs 16 \
--temperature 0.2 \
--n_samples 50 \
--root "." \
--dataset humaneval
```
##### Sanitization
```
python3 evalplus/sanitize.py \
humaneval/neuralmagic--granite-3.1-2b-base-FP8-dynamic_vllm_temp_0.2
```
##### Evaluation
```
evalplus.evaluate \
--dataset humaneval \
--samples humaneval/neuralmagic--granite-3.1-2b-base-FP8-dynamic_vllm_temp_0.2-sanitized
```
</details>
### Accuracy
<table>
<thead>
<tr>
<th>Category</th>
<th>Metric</th>
<th>ibm-granite/granite-3.1-2b-base</th>
<th>neuralmagic/granite-3.1-2b-base-FP8-dynamic</th>
<th>Recovery (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="7"><b>OpenLLM V1</b></td>
<td>ARC-Challenge (Acc-Norm, 25-shot)</td>
<td>53.75</td>
<td>53.50</td>
<td>99.54</td>
</tr>
<tr>
<td>GSM8K (Strict-Match, 5-shot)</td>
<td>47.84</td>
<td>46.10</td>
<td>96.36</td>
</tr>
<tr>
<td>HellaSwag (Acc-Norm, 10-shot)</td>
<td>77.94</td>
<td>77.76</td>
<td>99.77</td>
</tr>
<tr>
<td>MMLU (Acc, 5-shot)</td>
<td>52.88</td>
<td>52.61</td>
<td>99.49</td>
</tr>
<tr>
<td>TruthfulQA (MC2, 0-shot)</td>
<td>39.04</td>
<td>39.84</td>
<td>102.05</td>
</tr>
<tr>
<td>Winogrande (Acc, 5-shot)</td>
<td>74.43</td>
<td>74.43</td>
<td>100.00</td>
</tr>
<tr>
<td><b>Average Score</b></td>
<td><b>57.65</b></td>
<td><b>57.37</b></td>
<td><b>99.52</b></td>
</tr>
<tr>
<td rowspan="2"><b>Coding</b></td>
<td>HumanEval Pass@1</td>
<td>30.00</td>
<td>30.40</td>
<td>101.33</td>
</tr>
</tbody>
</table>
## Inference Performance
This model achieves up to 1.2x speedup in single-stream deployment on L40 GPUs.
The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.6.6.post1, and [GuideLLM](https://github.com/neuralmagic/guidellm).
<details>
<summary>Benchmarking Command</summary>
```
guidellm --model neuralmagic/granite-3.1-2b-base-FP8-dynamic --target "http://localhost:8000/v1" --data-type emulated --data "prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>" --max seconds 360 --backend aiohttp_server
```
</details>
### Single-stream performance (measured with vLLM version 0.6.6.post1)
<table>
<tr>
<td></td>
<td></td>
<td></td>
<th style="text-align: center;" colspan="7" >Latency (s)</th>
</tr>
<tr>
<th>GPU class</th>
<th>Model</th>
<th>Speedup</th>
<th>Code Completion<br>prefill: 256 tokens<br>decode: 1024 tokens</th>
<th>Docstring Generation<br>prefill: 768 tokens<br>decode: 128 tokens</th>
<th>Code Fixing<br>prefill: 1024 tokens<br>decode: 1024 tokens</th>
<th>RAG<br>prefill: 1024 tokens<br>decode: 128 tokens</th>
<th>Instruction Following<br>prefill: 256 tokens<br>decode: 128 tokens</th>
<th>Multi-turn Chat<br>prefill: 512 tokens<br>decode: 256 tokens</th>
<th>Large Summarization<br>prefill: 4096 tokens<br>decode: 512 tokens</th>
</tr>
<tr>
<td style="vertical-align: middle;" rowspan="3" >L40</td>
<td>granite-3.1-2b-base</td>
<td></td>
<td>9.3</td>
<td>1.2</td>
<td>9.4</td>
<td>1.2</td>
<td>1.2</td>
<td>2.3</td>
<td>5.0</td>
</tr>
<tr>
<td>granite-3.1-2b-base-FP8-dynamic<br>(this model)</td>
<td>1.26</td>
<td>7.3</td>
<td>0.9</td>
<td>7.4</td>
<td>1.0</td>
<td>0.9</td>
<td>1.8</td>
<td>4.1</td>
</tr>
<tr>
<td>granite-3.1-2b-base-quantized.w4a16</td>
<td>1.88</td>
<td>4.8</td>
<td>0.6</td>
<td>4.9</td>
<td>0.6</td>
<td>0.6</td>
<td>1.2</td>
<td>2.8</td>
</tr>
</table>
| null |
Non_BioNLP
|
# granite-3.1-2b-base-FP8-dynamic
## Model Overview
- **Model Architecture:** granite-3.1-2b-base
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** FP8
- **Activation quantization:** FP8
- **Release Date:** 1/8/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic
Quantized version of [ibm-granite/granite-3.1-2b-base](https://huggingface.co/ibm-granite/granite-3.1-2b-base).
It achieves an average score of 57.37 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 57.65.
### Model Optimizations
This model was obtained by quantizing the weights and activations of [ibm-granite/granite-3.1-2b-base](https://huggingface.co/ibm-granite/granite-3.1-2b-base) to FP8 data type, ready for inference with vLLM >= 0.5.2.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%. Only the weights and activations of the linear operators within transformers blocks are quantized.
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
max_model_len, tp_size = 4096, 1
model_name = "neuralmagic/granite-3.1-2b-base-FP8-dynamic"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True)
sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])
messages_list = [
[{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
]
prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]
outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)
generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
## Creation
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
<details>
<summary>Model Creation Code</summary>
```bash
python quantize.py --model_id ibm-granite/granite-3.1-2b-base --save_path "output_dir/"
```
```python
import argparse
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
import os
def main():
parser = argparse.ArgumentParser(description='Quantize a transformer model to FP8')
parser.add_argument('--model_id', type=str, required=True,
help='The model ID from HuggingFace (e.g., "meta-llama/Meta-Llama-3-2b-base")')
parser.add_argument('--save_path', type=str, default='.',
help='Custom path to save the quantized model. If not provided, will use model_name-FP8-dynamic')
args = parser.parse_args()
# Load model
model = AutoModelForCausalLM.from_pretrained(
args.model_id, device_map="auto", torch_dtype="auto", trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(args.model_id)
# Configure the quantization algorithm and scheme
recipe = QuantizationModifier(
targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
)
# Apply quantization
oneshot(model=model, recipe=recipe)
save_path = os.path.join(args.save_path, args.model_id.split("/")[1] + "-FP8-dynamic")
os.makedirs(save_path, exist_ok=True)
# Save to disk in compressed-tensors format
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")
if __name__ == "__main__":
main()
```
</details>
## Evaluation
The model was evaluated on OpenLLM Leaderboard [V1](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard), OpenLLM Leaderboard [V2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/) and on [HumanEval](https://github.com/neuralmagic/evalplus), using the following commands:
<details>
<summary>Evaluation Commands</summary>
OpenLLM Leaderboard V1:
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/granite-3.1-2b-base-FP8-dynamic",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
--tasks openllm \
--write_out \
--batch_size auto \
--output_path output_dir \
--show_config
```
#### HumanEval
##### Generation
```
python3 codegen/generate.py \
--model neuralmagic/granite-3.1-2b-base-FP8-dynamic \
--bs 16 \
--temperature 0.2 \
--n_samples 50 \
--root "." \
--dataset humaneval
```
##### Sanitization
```
python3 evalplus/sanitize.py \
humaneval/neuralmagic--granite-3.1-2b-base-FP8-dynamic_vllm_temp_0.2
```
##### Evaluation
```
evalplus.evaluate \
--dataset humaneval \
--samples humaneval/neuralmagic--granite-3.1-2b-base-FP8-dynamic_vllm_temp_0.2-sanitized
```
</details>
### Accuracy
<table>
<thead>
<tr>
<th>Category</th>
<th>Metric</th>
<th>ibm-granite/granite-3.1-2b-base</th>
<th>neuralmagic/granite-3.1-2b-base-FP8-dynamic</th>
<th>Recovery (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="7"><b>OpenLLM V1</b></td>
<td>ARC-Challenge (Acc-Norm, 25-shot)</td>
<td>53.75</td>
<td>53.50</td>
<td>99.54</td>
</tr>
<tr>
<td>GSM8K (Strict-Match, 5-shot)</td>
<td>47.84</td>
<td>46.10</td>
<td>96.36</td>
</tr>
<tr>
<td>HellaSwag (Acc-Norm, 10-shot)</td>
<td>77.94</td>
<td>77.76</td>
<td>99.77</td>
</tr>
<tr>
<td>MMLU (Acc, 5-shot)</td>
<td>52.88</td>
<td>52.61</td>
<td>99.49</td>
</tr>
<tr>
<td>TruthfulQA (MC2, 0-shot)</td>
<td>39.04</td>
<td>39.84</td>
<td>102.05</td>
</tr>
<tr>
<td>Winogrande (Acc, 5-shot)</td>
<td>74.43</td>
<td>74.43</td>
<td>100.00</td>
</tr>
<tr>
<td><b>Average Score</b></td>
<td><b>57.65</b></td>
<td><b>57.37</b></td>
<td><b>99.52</b></td>
</tr>
<tr>
<td rowspan="2"><b>Coding</b></td>
<td>HumanEval Pass@1</td>
<td>30.00</td>
<td>30.40</td>
<td>101.33</td>
</tr>
</tbody>
</table>
## Inference Performance
This model achieves up to 1.2x speedup in single-stream deployment on L40 GPUs.
The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.6.6.post1, and [GuideLLM](https://github.com/neuralmagic/guidellm).
<details>
<summary>Benchmarking Command</summary>
```
guidellm --model neuralmagic/granite-3.1-2b-base-FP8-dynamic --target "http://localhost:8000/v1" --data-type emulated --data "prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>" --max seconds 360 --backend aiohttp_server
```
</details>
### Single-stream performance (measured with vLLM version 0.6.6.post1)
<table>
<tr>
<td></td>
<td></td>
<td></td>
<th style="text-align: center;" colspan="7" >Latency (s)</th>
</tr>
<tr>
<th>GPU class</th>
<th>Model</th>
<th>Speedup</th>
<th>Code Completion<br>prefill: 256 tokens<br>decode: 1024 tokens</th>
<th>Docstring Generation<br>prefill: 768 tokens<br>decode: 128 tokens</th>
<th>Code Fixing<br>prefill: 1024 tokens<br>decode: 1024 tokens</th>
<th>RAG<br>prefill: 1024 tokens<br>decode: 128 tokens</th>
<th>Instruction Following<br>prefill: 256 tokens<br>decode: 128 tokens</th>
<th>Multi-turn Chat<br>prefill: 512 tokens<br>decode: 256 tokens</th>
<th>Large Summarization<br>prefill: 4096 tokens<br>decode: 512 tokens</th>
</tr>
<tr>
<td style="vertical-align: middle;" rowspan="3" >L40</td>
<td>granite-3.1-2b-base</td>
<td></td>
<td>9.3</td>
<td>1.2</td>
<td>9.4</td>
<td>1.2</td>
<td>1.2</td>
<td>2.3</td>
<td>5.0</td>
</tr>
<tr>
<td>granite-3.1-2b-base-FP8-dynamic<br>(this model)</td>
<td>1.26</td>
<td>7.3</td>
<td>0.9</td>
<td>7.4</td>
<td>1.0</td>
<td>0.9</td>
<td>1.8</td>
<td>4.1</td>
</tr>
<tr>
<td>granite-3.1-2b-base-quantized.w4a16</td>
<td>1.88</td>
<td>4.8</td>
<td>0.6</td>
<td>4.9</td>
<td>0.6</td>
<td>0.6</td>
<td>1.2</td>
<td>2.8</td>
</tr>
</table>
|
{"base_model": "ibm-granite/granite-3.1-2b-base", "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "license_link": "https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md", "tags": ["fp8", "vllm"]}
|
task
|
[
"SUMMARIZATION"
] | 42,537 |
MultiBertGunjanPatrick/multiberts-seed-3-1400k
|
MultiBertGunjanPatrick
| null |
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2021-10-04T05:08:37+00:00
| 104 | 0 |
---
datasets:
- bookcorpus
- wikipedia
language: en
license: apache-2.0
tags:
- exbert
- multiberts
- multiberts-seed-3
---
# MultiBERTs Seed 3 Checkpoint 1400k (uncased)
Seed 3 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1400k')
model = BertModel.from_pretrained("multiberts-seed-3-1400k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| null |
Non_BioNLP
|
# MultiBERTs Seed 3 Checkpoint 1400k (uncased)
Seed 3 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1400k')
model = BertModel.from_pretrained("multiberts-seed-3-1400k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
{"datasets": ["bookcorpus", "wikipedia"], "language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,538 |
TransferGraph/vesteinn_XLMR-ENIS-finetuned-cola-finetuned-lora-tweet_eval_hate
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:vesteinn/XLMR-ENIS-finetuned-cola",
"base_model:adapter:vesteinn/XLMR-ENIS-finetuned-cola",
"license:agpl-3.0",
"model-index",
"region:us"
] | 2024-02-29T13:40:21Z |
2024-02-29T13:40:24+00:00
| 1 | 0 |
---
base_model: vesteinn/XLMR-ENIS-finetuned-cola
datasets:
- tweet_eval
library_name: peft
license: agpl-3.0
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: vesteinn_XLMR-ENIS-finetuned-cola-finetuned-lora-tweet_eval_hate
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: hate
split: validation
args: hate
metrics:
- type: accuracy
value: 0.731
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vesteinn_XLMR-ENIS-finetuned-cola-finetuned-lora-tweet_eval_hate
This model is a fine-tuned version of [vesteinn/XLMR-ENIS-finetuned-cola](https://huggingface.co/vesteinn/XLMR-ENIS-finetuned-cola) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.445 | None | 0 |
| 0.7 | 0.6206 | 0 |
| 0.719 | 0.5166 | 1 |
| 0.708 | 0.4655 | 2 |
| 0.731 | 0.4467 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
| null |
TBD
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vesteinn_XLMR-ENIS-finetuned-cola-finetuned-lora-tweet_eval_hate
This model is a fine-tuned version of [vesteinn/XLMR-ENIS-finetuned-cola](https://huggingface.co/vesteinn/XLMR-ENIS-finetuned-cola) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.445 | None | 0 |
| 0.7 | 0.6206 | 0 |
| 0.719 | 0.5166 | 1 |
| 0.708 | 0.4655 | 2 |
| 0.731 | 0.4467 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"base_model": "vesteinn/XLMR-ENIS-finetuned-cola", "datasets": ["tweet_eval"], "library_name": "peft", "license": "agpl-3.0", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "vesteinn_XLMR-ENIS-finetuned-cola-finetuned-lora-tweet_eval_hate", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "hate", "split": "validation", "args": "hate"}, "metrics": [{"type": "accuracy", "value": 0.731, "name": "accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,539 |
haonan-li/bactrian-es-bloom-7b1-lora
|
haonan-li
| null |
[
"arxiv:2305.15011",
"license:mit",
"region:us"
] | 2023-06-13T13:30:07Z |
2023-06-13T13:30:20+00:00
| 0 | 0 |
---
license: mit
---
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Spanish.
### Dataset Creation
1. English Instructions: The English instuctions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca), and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using Google Translation API (conducted on April 2023).
3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted on April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-es-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| null |
Non_BioNLP
|
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Spanish.
### Dataset Creation
1. English Instructions: The English instuctions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca), and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using Google Translation API (conducted on April 2023).
3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted on April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-es-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"license": "mit"}
|
task
|
[
"TRANSLATION"
] | 42,540 |
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task558
|
Lots-of-LoRAs
| null |
[
"pytorch",
"safetensors",
"en",
"arxiv:1910.09700",
"arxiv:2407.00066",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:mit",
"region:us"
] | 2025-01-01T13:41:04Z |
2025-01-01T13:41:09+00:00
| 0 | 0 |
---
base_model: mistralai/Mistral-7B-Instruct-v0.2
language: en
library_name: pytorch
license: mit
---
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task558
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
LoRA trained on task558_alt_translation_en_hi
- **Developed by:** bruel
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** LoRA
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bruel-gabrielsson
- **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Lots-of-LoRAs/task558_alt_translation_en_hi sourced from https://github.com/allenai/natural-instructions
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
@misc{brüelgabrielsson2024compressserveservingthousands,
title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead},
author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
year={2024},
eprint={2407.00066},
archivePrefix={arXiv},
primaryClass={cs.DC},
url={https://arxiv.org/abs/2407.00066},
}
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| null |
Non_BioNLP
|
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task558
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
LoRA trained on task558_alt_translation_en_hi
- **Developed by:** bruel
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** LoRA
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bruel-gabrielsson
- **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Lots-of-LoRAs/task558_alt_translation_en_hi sourced from https://github.com/allenai/natural-instructions
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
@misc{brüelgabrielsson2024compressserveservingthousands,
title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead},
author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
year={2024},
eprint={2407.00066},
archivePrefix={arXiv},
primaryClass={cs.DC},
url={https://arxiv.org/abs/2407.00066},
}
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"base_model": "mistralai/Mistral-7B-Instruct-v0.2", "language": "en", "library_name": "pytorch", "license": "mit"}
|
task
|
[
"TRANSLATION"
] | 42,542 |
cstr/Spaetzle-v8-7b
|
cstr
|
text-generation
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"flemmingmiguel/NeuDist-Ro-7B",
"johannhartmann/Brezn3",
"ResplendentAI/Flora_DPO_7B",
"conversational",
"de",
"en",
"base_model:ResplendentAI/Flora_DPO_7B",
"base_model:merge:ResplendentAI/Flora_DPO_7B",
"base_model:flemmingmiguel/NeuDist-Ro-7B",
"base_model:merge:flemmingmiguel/NeuDist-Ro-7B",
"base_model:johannhartmann/Brezn3",
"base_model:merge:johannhartmann/Brezn3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-03-10T18:50:38Z |
2024-04-18T09:56:43+00:00
| 29 | 2 |
---
base_model:
- flemmingmiguel/NeuDist-Ro-7B
- johannhartmann/Brezn3
- ResplendentAI/Flora_DPO_7B
language:
- de
- en
tags:
- merge
- mergekit
- lazymergekit
- flemmingmiguel/NeuDist-Ro-7B
- johannhartmann/Brezn3
- ResplendentAI/Flora_DPO_7B
---
# Spaetzle-v8-7b
This model is supposed to show adequate performance in German and English on a number of tasks, while mostly behaving well, that is, without rambling on, intermixing tokens from different templates in training and adapting, etc.
It is mostly a quick test, and considerably weaker in German grammar and orthography than DiscoLM e.g., but for use cases where this is not too important, but e.g. instruction following, reasoning, etc, it might actually be a little bit preferable.
It is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [flemmingmiguel/NeuDist-Ro-7B](https://huggingface.co/flemmingmiguel/NeuDist-Ro-7B)
* [johannhartmann/Brezn3](https://huggingface.co/johannhartmann/Brezn3)
* [ResplendentAI/Flora_DPO_7B](https://huggingface.co/ResplendentAI/Flora_DPO_7B)
* on the basis of [mayflowergmbh/Wiedervereinigung-7b-dpo-laser](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b-dpo-laser)
All credits are due to the creators of those original models and the training datasets involved.
For a suitable quantized version, try [cstr/Spaetzle-v8-7b-GGUF](https://huggingface.co/cstr/Spaetzle-v8-7b-GGUF)
## Evaluation
[Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_cstr__Spaetzle-v8-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.27|
|AI2 Reasoning Challenge (25-Shot)|68.69|
|HellaSwag (10-Shot) |86.68|
|MMLU (5-Shot) |64.60|
|TruthfulQA (0-shot) |64.05|
|Winogrande (5-shot) |81.45|
|GSM8k (5-shot) |68.16|
EQ-Bench (v2_de): 61.04 / english (v2): 78.3
[ScandEval](https://scandeval.com/german-nlg/) 12.5.2 scores
| Benchmark | Spaetzle-v8-7b Value |
|-----------------------|----------------------------------------------------|
| Model ID | cstr/Spaetzle-v8-7b (few-shot, val) |
| Parameters | 7242 |
| Vocabulary Size | 32 |
| Context | 32768 |
| Commercial | False |
| Speed | 5,980 ± 1,031 / 1,714 ± 552 |
| Rank | 1.85 |
| GermEval | 58.90 ± 2.30 / 45.55 ± 3.30 |
| SB10k | 61.34 ± 1.90 / 72.98 ± 1.30 |
| ScaLA-De | 31.58 ± 4.39 / 65.51 ± 2.23 |
| GermanQuAD | 24.91 ± 3.98 / 60.88 ± 3.31 |
| MLSum | 67.25 ± 1.06 / 22.95 ± 2.64 |
| MMLU-De | 34.62 ± 2.20 / 50.43 ± 1.52 |
| HellaSwag-De | 48.70 ± 2.47 / 61.05 ± 1.79 |
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[Spaetzle-v8-7b](https://huggingface.co/cstr/Spaetzle-v8-7b)| 45.31| 75.69| 63.94| 45.57| 57.63|
### AGIEval
| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |25.59|± | 2.74|
| | |acc_norm|24.80|± | 2.72|
|agieval_logiqa_en | 0|acc |39.63|± | 1.92|
| | |acc_norm|39.78|± | 1.92|
|agieval_lsat_ar | 0|acc |23.48|± | 2.80|
| | |acc_norm|24.35|± | 2.84|
|agieval_lsat_lr | 0|acc |50.98|± | 2.22|
| | |acc_norm|51.96|± | 2.21|
|agieval_lsat_rc | 0|acc |62.08|± | 2.96|
| | |acc_norm|62.83|± | 2.95|
|agieval_sat_en | 0|acc |78.64|± | 2.86|
| | |acc_norm|79.13|± | 2.84|
|agieval_sat_en_without_passage| 0|acc |44.66|± | 3.47|
| | |acc_norm|44.66|± | 3.47|
|agieval_sat_math | 0|acc |37.27|± | 3.27|
| | |acc_norm|35.00|± | 3.22|
Average: 45.31%
### GPT4All
| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |63.14|± | 1.41|
| | |acc_norm|64.51|± | 1.40|
|arc_easy | 0|acc |85.98|± | 0.71|
| | |acc_norm|82.49|± | 0.78|
|boolq | 1|acc |88.10|± | 0.57|
|hellaswag | 0|acc |66.31|± | 0.47|
| | |acc_norm|85.17|± | 0.35|
|openbookqa | 0|acc |38.00|± | 2.17|
| | |acc_norm|47.20|± | 2.23|
|piqa | 0|acc |83.35|± | 0.87|
| | |acc_norm|84.17|± | 0.85|
|winogrande | 0|acc |78.22|± | 1.16|
Average: 75.69%
### TruthfulQA
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |47.74|± | 1.75|
| | |mc2 |63.94|± | 1.53|
Average: 63.94%
### Bigbench
| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|56.84|± | 3.60|
|bigbench_date_understanding | 0|multiple_choice_grade|66.12|± | 2.47|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|41.47|± | 3.07|
|bigbench_geometric_shapes | 0|multiple_choice_grade|22.01|± | 2.19|
| | |exact_str_match | 0.00|± | 0.00|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|31.40|± | 2.08|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|23.14|± | 1.60|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|56.00|± | 2.87|
|bigbench_movie_recommendation | 0|multiple_choice_grade|45.00|± | 2.23|
|bigbench_navigate | 0|multiple_choice_grade|50.70|± | 1.58|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|70.05|± | 1.02|
|bigbench_ruin_names | 0|multiple_choice_grade|45.54|± | 2.36|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|26.05|± | 1.39|
|bigbench_snarks | 0|multiple_choice_grade|71.82|± | 3.35|
|bigbench_sports_understanding | 0|multiple_choice_grade|72.92|± | 1.42|
|bigbench_temporal_sequences | 0|multiple_choice_grade|44.20|± | 1.57|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|22.80|± | 1.19|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|18.23|± | 0.92|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|56.00|± | 2.87|
Average: 45.57%
Average score: 57.63%
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "cstr/Spaetzle-v8-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## 🧩 Configuration
The model uses ChatML and should work well with this (as it is merged from models which (mostly) saw ChatML templates in training).
```yaml
models:
- model: mayflowergmbh/Wiedervereinigung-7b-dpo-laser
# no parameters necessary for base model
- model: flemmingmiguel/NeuDist-Ro-7B
parameters:
density: 0.60
weight: 0.30
- model: johannhartmann/Brezn3
parameters:
density: 0.65
weight: 0.40
- model: ResplendentAI/Flora_DPO_7B
parameters:
density: 0.6
weight: 0.3
merge_method: dare_ties
base_model: mayflowergmbh/Wiedervereinigung-7b-dpo-laser
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```
| null |
Non_BioNLP
|
# Spaetzle-v8-7b
This model is supposed to show adequate performance in German and English on a number of tasks, while mostly behaving well, that is, without rambling on, intermixing tokens from different templates in training and adapting, etc.
It is mostly a quick test, and considerably weaker in German grammar and orthography than DiscoLM e.g., but for use cases where this is not too important, but e.g. instruction following, reasoning, etc, it might actually be a little bit preferable.
It is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [flemmingmiguel/NeuDist-Ro-7B](https://huggingface.co/flemmingmiguel/NeuDist-Ro-7B)
* [johannhartmann/Brezn3](https://huggingface.co/johannhartmann/Brezn3)
* [ResplendentAI/Flora_DPO_7B](https://huggingface.co/ResplendentAI/Flora_DPO_7B)
* on the basis of [mayflowergmbh/Wiedervereinigung-7b-dpo-laser](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b-dpo-laser)
All credits are due to the creators of those original models and the training datasets involved.
For a suitable quantized version, try [cstr/Spaetzle-v8-7b-GGUF](https://huggingface.co/cstr/Spaetzle-v8-7b-GGUF)
## Evaluation
[Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_cstr__Spaetzle-v8-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.27|
|AI2 Reasoning Challenge (25-Shot)|68.69|
|HellaSwag (10-Shot) |86.68|
|MMLU (5-Shot) |64.60|
|TruthfulQA (0-shot) |64.05|
|Winogrande (5-shot) |81.45|
|GSM8k (5-shot) |68.16|
EQ-Bench (v2_de): 61.04 / english (v2): 78.3
[ScandEval](https://scandeval.com/german-nlg/) 12.5.2 scores
| Benchmark | Spaetzle-v8-7b Value |
|-----------------------|----------------------------------------------------|
| Model ID | cstr/Spaetzle-v8-7b (few-shot, val) |
| Parameters | 7242 |
| Vocabulary Size | 32 |
| Context | 32768 |
| Commercial | False |
| Speed | 5,980 ± 1,031 / 1,714 ± 552 |
| Rank | 1.85 |
| GermEval | 58.90 ± 2.30 / 45.55 ± 3.30 |
| SB10k | 61.34 ± 1.90 / 72.98 ± 1.30 |
| ScaLA-De | 31.58 ± 4.39 / 65.51 ± 2.23 |
| GermanQuAD | 24.91 ± 3.98 / 60.88 ± 3.31 |
| MLSum | 67.25 ± 1.06 / 22.95 ± 2.64 |
| MMLU-De | 34.62 ± 2.20 / 50.43 ± 1.52 |
| HellaSwag-De | 48.70 ± 2.47 / 61.05 ± 1.79 |
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[Spaetzle-v8-7b](https://huggingface.co/cstr/Spaetzle-v8-7b)| 45.31| 75.69| 63.94| 45.57| 57.63|
### AGIEval
| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |25.59|± | 2.74|
| | |acc_norm|24.80|± | 2.72|
|agieval_logiqa_en | 0|acc |39.63|± | 1.92|
| | |acc_norm|39.78|± | 1.92|
|agieval_lsat_ar | 0|acc |23.48|± | 2.80|
| | |acc_norm|24.35|± | 2.84|
|agieval_lsat_lr | 0|acc |50.98|± | 2.22|
| | |acc_norm|51.96|± | 2.21|
|agieval_lsat_rc | 0|acc |62.08|± | 2.96|
| | |acc_norm|62.83|± | 2.95|
|agieval_sat_en | 0|acc |78.64|± | 2.86|
| | |acc_norm|79.13|± | 2.84|
|agieval_sat_en_without_passage| 0|acc |44.66|± | 3.47|
| | |acc_norm|44.66|± | 3.47|
|agieval_sat_math | 0|acc |37.27|± | 3.27|
| | |acc_norm|35.00|± | 3.22|
Average: 45.31%
### GPT4All
| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |63.14|± | 1.41|
| | |acc_norm|64.51|± | 1.40|
|arc_easy | 0|acc |85.98|± | 0.71|
| | |acc_norm|82.49|± | 0.78|
|boolq | 1|acc |88.10|± | 0.57|
|hellaswag | 0|acc |66.31|± | 0.47|
| | |acc_norm|85.17|± | 0.35|
|openbookqa | 0|acc |38.00|± | 2.17|
| | |acc_norm|47.20|± | 2.23|
|piqa | 0|acc |83.35|± | 0.87|
| | |acc_norm|84.17|± | 0.85|
|winogrande | 0|acc |78.22|± | 1.16|
Average: 75.69%
### TruthfulQA
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |47.74|± | 1.75|
| | |mc2 |63.94|± | 1.53|
Average: 63.94%
### Bigbench
| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|56.84|± | 3.60|
|bigbench_date_understanding | 0|multiple_choice_grade|66.12|± | 2.47|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|41.47|± | 3.07|
|bigbench_geometric_shapes | 0|multiple_choice_grade|22.01|± | 2.19|
| | |exact_str_match | 0.00|± | 0.00|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|31.40|± | 2.08|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|23.14|± | 1.60|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|56.00|± | 2.87|
|bigbench_movie_recommendation | 0|multiple_choice_grade|45.00|± | 2.23|
|bigbench_navigate | 0|multiple_choice_grade|50.70|± | 1.58|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|70.05|± | 1.02|
|bigbench_ruin_names | 0|multiple_choice_grade|45.54|± | 2.36|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|26.05|± | 1.39|
|bigbench_snarks | 0|multiple_choice_grade|71.82|± | 3.35|
|bigbench_sports_understanding | 0|multiple_choice_grade|72.92|± | 1.42|
|bigbench_temporal_sequences | 0|multiple_choice_grade|44.20|± | 1.57|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|22.80|± | 1.19|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|18.23|± | 0.92|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|56.00|± | 2.87|
Average: 45.57%
Average score: 57.63%
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "cstr/Spaetzle-v8-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## 🧩 Configuration
The model uses ChatML and should work well with this (as it is merged from models which (mostly) saw ChatML templates in training).
```yaml
models:
- model: mayflowergmbh/Wiedervereinigung-7b-dpo-laser
# no parameters necessary for base model
- model: flemmingmiguel/NeuDist-Ro-7B
parameters:
density: 0.60
weight: 0.30
- model: johannhartmann/Brezn3
parameters:
density: 0.65
weight: 0.40
- model: ResplendentAI/Flora_DPO_7B
parameters:
density: 0.6
weight: 0.3
merge_method: dare_ties
base_model: mayflowergmbh/Wiedervereinigung-7b-dpo-laser
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```
|
{"base_model": ["flemmingmiguel/NeuDist-Ro-7B", "johannhartmann/Brezn3", "ResplendentAI/Flora_DPO_7B"], "language": ["de", "en"], "tags": ["merge", "mergekit", "lazymergekit", "flemmingmiguel/NeuDist-Ro-7B", "johannhartmann/Brezn3", "ResplendentAI/Flora_DPO_7B"]}
|
task
|
[
"TRANSLATION"
] | 42,543 |
csocsci/xlm-roberta-large-multi-label-en-iid
|
csocsci
|
feature-extraction
|
[
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"multilingual",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2023-09-21T12:46:22Z |
2023-09-22T11:57:06+00:00
| 7 | 0 |
---
language:
- multilingual
license: mit
---
# Model Card for xlm-roberta-large-multi-label-en-iid
<!-- Provide a quick summary of what the model is/does. -->
This model is fine-tuned for multi-label text classification of Supportive Interactions in Instant Messenger dialogs of Adolescents.
## Model Description
The model was fine-tuned on a dataset of Instant Messenger dialogs of Adolescents. The classification is multi-label and the model outputs probablities for labels {0,1,2,3,4,5}:
0. None
1. Informational Support
2. Emotional Support
3. Social Companionship
4. Appraisal
5. Instrumental Support
- **Developed by:** Anonymous
- **Language(s):** multilingual
- **Finetuned from:** xlm-roberta-large
## Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/chi2024submission
- **Paper:** Stay tuned!
## Usage
Here is how to use this model to classify a context-window of a dialogue:
```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# Prepare input texts. This model is pretrained on multi-lingual data
# and fine-tuned on English
test_texts = ['Utterance1;Utterance2;Utterance3']
# Load the model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained(
'chi2024/xlm-roberta-large-multi-label-en-iid', num_labels=6).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(
'chi2024/xlm-roberta-large-multi-label-en-iid',
use_fast=False, truncation_side='left')
assert tokenizer.truncation_side == 'left'
# Define helper functions
def predict_one(text: str, tok, mod, threshold=0.5):
encoding = tok(text, return_tensors="pt", truncation=True, padding=True,
max_length=256)
encoding = {k: v.to(mod.device) for k, v in encoding.items()}
outputs = mod(**encoding)
logits = outputs.logits
sigmoid = torch.nn.Sigmoid()
probs = sigmoid(logits.squeeze().cpu())
predictions = np.zeros(probs.shape)
predictions[np.where(probs >= threshold)] = 1
return predictions, probs
def print_predictions(texts):
preds = [predict_one(tt, tokenizer, model) for tt in texts]
for c, p in preds:
print(f'{c}: {p.tolist():.4f}')
# Run the prediction
print_predictions(test_texts)
```
| null |
Non_BioNLP
|
# Model Card for xlm-roberta-large-multi-label-en-iid
<!-- Provide a quick summary of what the model is/does. -->
This model is fine-tuned for multi-label text classification of Supportive Interactions in Instant Messenger dialogs of Adolescents.
## Model Description
The model was fine-tuned on a dataset of Instant Messenger dialogs of Adolescents. The classification is multi-label and the model outputs probablities for labels {0,1,2,3,4,5}:
0. None
1. Informational Support
2. Emotional Support
3. Social Companionship
4. Appraisal
5. Instrumental Support
- **Developed by:** Anonymous
- **Language(s):** multilingual
- **Finetuned from:** xlm-roberta-large
## Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/chi2024submission
- **Paper:** Stay tuned!
## Usage
Here is how to use this model to classify a context-window of a dialogue:
```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# Prepare input texts. This model is pretrained on multi-lingual data
# and fine-tuned on English
test_texts = ['Utterance1;Utterance2;Utterance3']
# Load the model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained(
'chi2024/xlm-roberta-large-multi-label-en-iid', num_labels=6).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(
'chi2024/xlm-roberta-large-multi-label-en-iid',
use_fast=False, truncation_side='left')
assert tokenizer.truncation_side == 'left'
# Define helper functions
def predict_one(text: str, tok, mod, threshold=0.5):
encoding = tok(text, return_tensors="pt", truncation=True, padding=True,
max_length=256)
encoding = {k: v.to(mod.device) for k, v in encoding.items()}
outputs = mod(**encoding)
logits = outputs.logits
sigmoid = torch.nn.Sigmoid()
probs = sigmoid(logits.squeeze().cpu())
predictions = np.zeros(probs.shape)
predictions[np.where(probs >= threshold)] = 1
return predictions, probs
def print_predictions(texts):
preds = [predict_one(tt, tokenizer, model) for tt in texts]
for c, p in preds:
print(f'{c}: {p.tolist():.4f}')
# Run the prediction
print_predictions(test_texts)
```
|
{"language": ["multilingual"], "license": "mit"}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,544 |
QianT/autotrain-english_translation-39667103325
|
QianT
|
translation
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain",
"translation",
"en",
"zh",
"dataset:QianT/autotrain-data-english_translation",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-03-08T12:58:29Z |
2023-03-08T13:01:03+00:00
| 19 | 1 |
---
datasets:
- QianT/autotrain-data-english_translation
language:
- en
- zh
tags:
- autotrain
- translation
co2_eq_emissions:
emissions: 0.004448687551041184
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 39667103325
- CO2 Emissions (in grams): 0.0044
## Validation Metrics
- Loss: 0.959
- SacreBLEU: 21.605
- Gen len: 41.193
| null |
Non_BioNLP
|
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 39667103325
- CO2 Emissions (in grams): 0.0044
## Validation Metrics
- Loss: 0.959
- SacreBLEU: 21.605
- Gen len: 41.193
|
{"datasets": ["QianT/autotrain-data-english_translation"], "language": ["en", "zh"], "tags": ["autotrain", "translation"], "co2_eq_emissions": {"emissions": 0.004448687551041184}}
|
task
|
[
"TRANSLATION"
] | 42,545 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.