| Column | Dtype | Range / distinct values |
|:--|:--|:--|
| `id` | stringlengths | 6 to 113 |
| `author` | stringlengths | 2 to 36 |
| `task_category` | stringclasses | 42 values |
| `tags` | listlengths | 1 to 4.05k |
| `created_time` | timestamp[ns, tz=UTC]date | 2022-03-02 23:29:04 to 2025-04-10 08:38:38 |
| `last_modified` | stringdate | 2020-05-14 13:13:12 to 2025-04-19 04:15:39 |
| `downloads` | int64 | 0 to 118M |
| `likes` | int64 | 0 to 4.86k |
| `README` | stringlengths | 30 to 1.01M |
| `matched_bigbio_names` | listlengths | 1 to 8 |
| `is_bionlp` | stringclasses | 3 values |
| `model_cards` | stringlengths | 0 to 1M |
| `metadata` | stringlengths | 2 to 698k |
| `source` | stringclasses | 2 values |
| `matched_task` | listlengths | 1 to 10 |
| `__index_level_0__` | int64 | 0 to 46.9k |
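Loaded with the `datasets` library, the columns above can be inspected and filtered directly. Below is a minimal sketch; the repository ID `user/model-cards-dataset` is a hypothetical placeholder, since the dataset's actual Hub name is not stated here.

```python
from datasets import load_dataset

# NOTE: "user/model-cards-dataset" is a hypothetical placeholder for this dataset's Hub ID.
ds = load_dataset("user/model-cards-dataset", split="train")

# Inspect the column schema summarized in the table above.
print(ds.features)

# Example: keep only rows whose matched_task list contains text classification.
text_clf = ds.filter(
    lambda row: row["matched_task"] is not None and "TEXT_CLASSIFICATION" in row["matched_task"]
)
print(len(text_clf))
```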
jmwolf27/finetuning-sentiment-model-3000-samples
jmwolf27
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-06-28T02:00:06Z
2022-06-28T02:19:32+00:00
94
0
--- datasets: - imdb license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: finetuning-sentiment-model-3000-samples results: - task: type: text-classification name: Text Classification dataset: name: imdb type: imdb args: plain_text metrics: - type: accuracy value: 0.8766666666666667 name: Accuracy - type: f1 value: 0.877887788778878 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3167 - Accuracy: 0.8767 - F1: 0.8779 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3167 - Accuracy: 0.8767 - F1: 0.8779 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
{"datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "finetuning-sentiment-model-3000-samples", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.8766666666666667, "name": "Accuracy"}, {"type": "f1", "value": 0.877887788778878, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,996
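The card in the row above reports a DistilBERT checkpoint fine-tuned on IMDB (accuracy 0.8767, F1 0.8779). A hedged inference sketch, assuming the checkpoint `jmwolf27/finetuning-sentiment-model-3000-samples` is still available on the Hub:

```python
from transformers import pipeline

# Assumes the checkpoint referenced in the card above is still downloadable.
classifier = pipeline(
    "text-classification",
    model="jmwolf27/finetuning-sentiment-model-3000-samples",
)

# Returns a list like [{'label': ..., 'score': ...}]; label names depend on the exported config.
print(classifier("This movie was a pleasant surprise."))
```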
gokuls/distilbert_add_GLUE_Experiment_mrpc_96
gokuls
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-01-26T12:30:28Z
2023-01-26T12:32:26+00:00
132
0
--- datasets: - glue language: - en license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert_add_GLUE_Experiment_mrpc_96 results: - task: type: text-classification name: Text Classification dataset: name: GLUE MRPC type: glue config: mrpc split: validation args: mrpc metrics: - type: accuracy value: 0.6838235294117647 name: Accuracy - type: f1 value: 0.8122270742358079 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_add_GLUE_Experiment_mrpc_96 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.6239 - Accuracy: 0.6838 - F1: 0.8122 - Combined Score: 0.7480 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:| | 0.6686 | 1.0 | 15 | 0.6467 | 0.6838 | 0.8122 | 0.7480 | | 0.6433 | 2.0 | 30 | 0.6372 | 0.6838 | 0.8122 | 0.7480 | | 0.6378 | 3.0 | 45 | 0.6319 | 0.6838 | 0.8122 | 0.7480 | | 0.6344 | 4.0 | 60 | 0.6284 | 0.6838 | 0.8122 | 0.7480 | | 0.6343 | 5.0 | 75 | 0.6266 | 0.6838 | 0.8122 | 0.7480 | | 0.6299 | 6.0 | 90 | 0.6252 | 0.6838 | 0.8122 | 0.7480 | | 0.6335 | 7.0 | 105 | 0.6247 | 0.6838 | 0.8122 | 0.7480 | | 0.6308 | 8.0 | 120 | 0.6243 | 0.6838 | 0.8122 | 0.7480 | | 0.6306 | 9.0 | 135 | 0.6243 | 0.6838 | 0.8122 | 0.7480 | | 0.6302 | 10.0 | 150 | 0.6241 | 0.6838 | 0.8122 | 0.7480 | | 0.6296 | 11.0 | 165 | 0.6241 | 0.6838 | 0.8122 | 0.7480 | | 0.6305 | 12.0 | 180 | 0.6239 | 0.6838 | 0.8122 | 0.7480 | | 0.634 | 13.0 | 195 | 0.6242 | 0.6838 | 0.8122 | 0.7480 | | 0.63 | 14.0 | 210 | 0.6243 | 0.6838 | 0.8122 | 0.7480 | | 0.6314 | 15.0 | 225 | 0.6242 | 0.6838 | 0.8122 | 0.7480 | | 0.6286 | 16.0 | 240 | 0.6239 | 0.6838 | 0.8122 | 0.7480 | | 0.6326 | 17.0 | 255 | 0.6242 | 0.6838 | 0.8122 | 0.7480 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.8.0 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_add_GLUE_Experiment_mrpc_96 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.6239 - Accuracy: 0.6838 - F1: 0.8122 - Combined Score: 0.7480 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:| | 0.6686 | 1.0 | 15 | 0.6467 | 0.6838 | 0.8122 | 0.7480 | | 0.6433 | 2.0 | 30 | 0.6372 | 0.6838 | 0.8122 | 0.7480 | | 0.6378 | 3.0 | 45 | 0.6319 | 0.6838 | 0.8122 | 0.7480 | | 0.6344 | 4.0 | 60 | 0.6284 | 0.6838 | 0.8122 | 0.7480 | | 0.6343 | 5.0 | 75 | 0.6266 | 0.6838 | 0.8122 | 0.7480 | | 0.6299 | 6.0 | 90 | 0.6252 | 0.6838 | 0.8122 | 0.7480 | | 0.6335 | 7.0 | 105 | 0.6247 | 0.6838 | 0.8122 | 0.7480 | | 0.6308 | 8.0 | 120 | 0.6243 | 0.6838 | 0.8122 | 0.7480 | | 0.6306 | 9.0 | 135 | 0.6243 | 0.6838 | 0.8122 | 0.7480 | | 0.6302 | 10.0 | 150 | 0.6241 | 0.6838 | 0.8122 | 0.7480 | | 0.6296 | 11.0 | 165 | 0.6241 | 0.6838 | 0.8122 | 0.7480 | | 0.6305 | 12.0 | 180 | 0.6239 | 0.6838 | 0.8122 | 0.7480 | | 0.634 | 13.0 | 195 | 0.6242 | 0.6838 | 0.8122 | 0.7480 | | 0.63 | 14.0 | 210 | 0.6243 | 0.6838 | 0.8122 | 0.7480 | | 0.6314 | 15.0 | 225 | 0.6242 | 0.6838 | 0.8122 | 0.7480 | | 0.6286 | 16.0 | 240 | 0.6239 | 0.6838 | 0.8122 | 0.7480 | | 0.6326 | 17.0 | 255 | 0.6242 | 0.6838 | 0.8122 | 0.7480 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.8.0 - Tokenizers 0.13.2
{"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert_add_GLUE_Experiment_mrpc_96", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE MRPC", "type": "glue", "config": "mrpc", "split": "validation", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.6838235294117647, "name": "Accuracy"}, {"type": "f1", "value": 0.8122270742358079, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,997
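The MRPC card above lists its hyperparameters (learning rate 5e-05, batch size 256, seed 10, linear schedule, 50 epochs, native AMP). They map onto a `TrainingArguments` configuration roughly like the sketch below; this is an illustrative reconstruction, not the author's actual training script, and the per-epoch evaluation is inferred from the results table.

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the hyperparameters reported in the card above.
training_args = TrainingArguments(
    output_dir="distilbert_add_GLUE_Experiment_mrpc_96",
    learning_rate=5e-5,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=10,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    fp16=True,                      # "Native AMP" mixed precision in the card
    evaluation_strategy="epoch",    # inferred from the per-epoch validation rows
)
```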
Hiranmai49/judicial-summarization-Mistral-finetuned_mildsum_FL
Hiranmai49
null
[ "peft", "safetensors", "trl", "sft", "unsloth", "generated_from_trainer", "base_model:unsloth/mistral-7b-v0.3-bnb-4bit", "base_model:adapter:unsloth/mistral-7b-v0.3-bnb-4bit", "license:apache-2.0", "region:us" ]
2024-09-14T03:07:48Z
2024-09-14T04:46:33+00:00
7
0
--- base_model: unsloth/mistral-7b-v0.3-bnb-4bit library_name: peft license: apache-2.0 tags: - trl - sft - unsloth - generated_from_trainer model-index: - name: judicial-summarization-Mistral-finetuned_mildsum_FL results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # judicial-summarization-Mistral-finetuned_mildsum_FL This model is a fine-tuned version of [unsloth/mistral-7b-v0.3-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-v0.3-bnb-4bit) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.0214 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 3407 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.143 | 0.9991 | 273 | 1.2892 | | 1.1243 | 1.9982 | 546 | 1.2952 | | 0.8696 | 2.9973 | 819 | 1.3759 | | 0.593 | 4.0 | 1093 | 1.5161 | | 0.3457 | 4.9991 | 1366 | 1.7477 | | 0.1588 | 5.9945 | 1638 | 2.0214 | ### Framework versions - PEFT 0.12.0 - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # judicial-summarization-Mistral-finetuned_mildsum_FL This model is a fine-tuned version of [unsloth/mistral-7b-v0.3-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-v0.3-bnb-4bit) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.0214 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 3407 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.143 | 0.9991 | 273 | 1.2892 | | 1.1243 | 1.9982 | 546 | 1.2952 | | 0.8696 | 2.9973 | 819 | 1.3759 | | 0.593 | 4.0 | 1093 | 1.5161 | | 0.3457 | 4.9991 | 1366 | 1.7477 | | 0.1588 | 5.9945 | 1638 | 2.0214 | ### Framework versions - PEFT 0.12.0 - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
{"base_model": "unsloth/mistral-7b-v0.3-bnb-4bit", "library_name": "peft", "license": "apache-2.0", "tags": ["trl", "sft", "unsloth", "generated_from_trainer"], "model-index": [{"name": "judicial-summarization-Mistral-finetuned_mildsum_FL", "results": []}]}
task
[ "SUMMARIZATION" ]
45,998
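The row above describes a PEFT adapter trained with SFT on top of a 4-bit Mistral base. A sketch of how such an adapter might be loaded for inference, assuming `bitsandbytes` is installed and the adapter repo is still available; the prompt wording is illustrative, since the card does not document the template used during fine-tuning.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: load the 4-bit base model, then attach the adapter from the card above.
base = AutoModelForCausalLM.from_pretrained("unsloth/mistral-7b-v0.3-bnb-4bit", device_map="auto")
model = PeftModel.from_pretrained(base, "Hiranmai49/judicial-summarization-Mistral-finetuned_mildsum_FL")
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-v0.3-bnb-4bit")

# Hypothetical prompt; replace <judgment text> with the document to summarize.
prompt = "Summarize the key holdings of the judgment below.\n\n<judgment text>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```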
dkoh12/distilbert-base-uncased-finetuned-clinc
dkoh12
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-05-05T20:57:52Z
2023-05-06T01:36:20+00:00
35
0
--- datasets: - clinc_oos license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-clinc results: - task: type: text-classification name: Text Classification dataset: name: clinc_oos type: clinc_oos config: plus split: validation args: plus metrics: - type: accuracy value: 0.9180645161290323 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7720 - Accuracy: 0.9181 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 318 | 3.2887 | 0.7419 | | 3.7868 | 2.0 | 636 | 1.8753 | 0.8371 | | 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 | | 1.6927 | 4.0 | 1272 | 0.8573 | 0.9129 | | 0.9056 | 5.0 | 1590 | 0.7720 | 0.9181 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7720 - Accuracy: 0.9181 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 318 | 3.2887 | 0.7419 | | 3.7868 | 2.0 | 636 | 1.8753 | 0.8371 | | 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 | | 1.6927 | 4.0 | 1272 | 0.8573 | 0.9129 | | 0.9056 | 5.0 | 1590 | 0.7720 | 0.9181 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
{"datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "config": "plus", "split": "validation", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9180645161290323, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,999
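The intent-classification card above reports 0.9181 validation accuracy on `clinc_oos` (config `plus`). A quick spot-check sketch, assuming the checkpoint is available and exports intent names in its `id2label` mapping:

```python
from datasets import load_dataset
from transformers import pipeline

# Sketch only: adjust the comparison if the pipeline returns generic "LABEL_<id>" strings.
clf = pipeline("text-classification", model="dkoh12/distilbert-base-uncased-finetuned-clinc")
val = load_dataset("clinc_oos", "plus", split="validation")

sample = val[0]
print(sample["text"])
print(clf(sample["text"])[0]["label"])                    # predicted intent
print(val.features["intent"].int2str(sample["intent"]))   # gold intent
```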
RichardErkhov/rinna_-_bilingual-gpt-neox-4b-instruction-sft-4bits
RichardErkhov
null
[ "safetensors", "gpt_neox", "arxiv:2404.01657", "4-bit", "bitsandbytes", "region:us" ]
2024-10-06T10:57:19Z
2024-10-06T10:59:18+00:00
4
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) bilingual-gpt-neox-4b-instruction-sft - bnb 4bits - Model creator: https://huggingface.co/rinna/ - Original model: https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft/ Original model description: --- thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png license: mit datasets: - Anthropic/hh-rlhf language: - ja - en inference: false base_model: rinna/bilingual-gpt-neox-4b --- # bilingual-gpt-neox-4b-instruction-sft ![rinna-icon](./rinna.png) --- # Update - **2023/08/02** We uploaded the newly trained `rinna/bilingual-gpt-neox-4b-instruction-sft` with the MIT license. - Please refrain from using the previous model released on 2023/07/31 for commercial purposes if you have already downloaded it. - The new model released on 2023/08/02 is built from datasets with less strict licenses and has better evaluation performance, so we suggest using the new model. - For reference, we provide the MD5 checksum values for the `pytorch_model.bin` files of the previous and current models. - 2023/07/31 model: `edf190a323c0ae63f71476700fb0b462` - 2023/08/02 model: `de72aa5b66beee7b65783c96f687d186` - **2023/07/31** In the previously released `rinna/bilingual-gpt-neox-4b-instruction-sft`, we found that part of the training data (i.e. Openchat ShareGPT4 and WizardLM) have a non-commercial license, and thus it does not comply with **the MIT license**. We decided to remove the previous version and build a new SFT model from datasets with less strict licenses. The new model will be uploaded in a few days. We sincerely apologize for our careless mistake. --- # Overview This repository provides an English-Japanese bilingual GPT-NeoX model of 3.8 billion parameters. The model is based on [`rinna/bilingual-gpt-neox-4b`](https://huggingface.co/rinna/bilingual-gpt-neox-4b) and has been finetuned to serve as an instruction-following conversational agent. * **Model architecture** A 36-layer, 2816-hidden-size transformer-based language model. * **Fine-tuning** The fine-tuning data is the subset of the following datasets. 
* [Anthropic HH RLHF data](https://huggingface.co/datasets/Anthropic/hh-rlhf) and its Japanese translation * [FLAN Instruction Tuning data](https://github.com/google-research/FLAN) and its Japanese translation * **Model Series** | Variant | Link | | :-- | :--| | Bilingual 4B MiniGPT4 | https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4 | | Bilingual 4B PPO | https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-ppo | | Bilingual 4B SFT | https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft | | Bilingual 4B 8K | https://huggingface.co/rinna/bilingual-gpt-neox-4b-8k | | Bilingual 4B | https://huggingface.co/rinna/bilingual-gpt-neox-4b | | Japanese 3.6B PPO | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo | | Japanese 3.6B SFT-v2 | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft-v2 | | Japanese 3.6B SFT | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft | | Japanese 3.6B | https://huggingface.co/rinna/japanese-gpt-neox-3.6b | * **Contributors** [Tianyu Zhao](https://huggingface.co/tianyuz) and [Kei Sawada](https://huggingface.co/keisawada) --- # Benchmarking Our evaluation experiments suggest that the bilingual-gpt-neox-4b-instruction-sft model performs slightly better than the previous [Japanese GPT-NeoX 3.6B PPO](https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo) in Japanese tasks. - *The 4-task average accuracy is based on results of JCommonsenseQA, JNLI, MARC-ja, and JSQuAD.* - *The 6-task average accuracy is based on results of JCommonsenseQA, JNLI, MARC-ja, JSQuAD, XWinograd, and JAQKET-v2.* | Model | 4-task average accuracy | 6-task average accuracy | | :-- | :-- | :-- | | bilingual-gpt-neox-4b-instruction-ppo | 61.01 | 61.16 | | **bilingual-gpt-neox-4b-instruction-sft** | **61.02** | **61.69** | | bilingual-gpt-neox-4b | 56.12 | 51.83 | | japanese-gpt-neox-3.6b-instruction-ppo | 59.86 | 60.07 | | japanese-gpt-neox-3.6b | 55.07 | 50.32 | --- # I/O Format A special format has been adopted to construct inputs. * An input prompt is formatted as a conversation between `ユーザー` and `システム`. * Each input utterance consists of (1) its speaker (`"ユーザー"` or `"システム"`), (2) a colon (`":"`), (3) a whitespace (`" "`), and (4) utterance text (e.g. `"世界で一番高い山は?"`). * The input prompt should be ended with `"システム: "` to acknowledge the model to generate a response. * All the utterances in the input prompt should be separated by a newline `\n`. Following is an example to construct input from a conversation. ~~~python prompt = [ { "speaker": "ユーザー", "text": "Hello, you are an assistant that helps me learn Japanese." }, { "speaker": "システム", "text": "Sure, what can I do for you?" }, { "speaker": "ユーザー", "text": "VRはなんですか。" } ] prompt = [ f"{uttr['speaker']}: {uttr['text']}" for uttr in prompt ] prompt = "\n".join(prompt) prompt = ( prompt + "\n" + "システム: " ) print(prompt) """ ユーザー: Hello, you are an assistant that helps me learn Japanese. システム: Sure, what can I do for you? ユーザー: VRはなんですか。 システム: """ ~~~ --- # How to use the model **Notice:** Since the model is **sensitive to decoding hyper-parameters** (e.g. `temperature`, `top_p`, `top_k`, `repetition_penalty`), it is suggested to explore the best setting for your task. 
~~~~python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("rinna/bilingual-gpt-neox-4b-instruction-sft", use_fast=False) model = AutoModelForCausalLM.from_pretrained("rinna/bilingual-gpt-neox-4b-instruction-sft") if torch.cuda.is_available(): model = model.to("cuda") token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") with torch.no_grad(): output_ids = model.generate( token_ids.to(model.device), max_new_tokens=512, do_sample=True, temperature=1.0, top_p=0.85, pad_token_id=tokenizer.pad_token_id, bos_token_id=tokenizer.bos_token_id, eos_token_id=tokenizer.eos_token_id ) output = tokenizer.decode(output_ids.tolist()[0][token_ids.size(1):]) print(output) """VRとはVirtual Realityの略で、仮想現実とも呼ばれます。これは、コンピューターを使用して仮想世界を作り出し、仮想世界上でコンピューターのゲームや仮想世界を体験するための技術です。この技術は、コンピューターやモバイ ルデバイスの進歩によって、2015年以降、ますます普及しています。VRは、ゲームや仮想世界、その他のアプリケー ションなどのさまざまな分野で、コンピューターと人間の相互作用の新しい方法を提供しています。</s>""" ~~~~ --- # Tokenization The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer. * The tokenizer has a vocabulary size of 65,536. * It uses *byte fallback* to decompose unknown text pieces into UTF-8 byte pieces to avoid producing `<UNK>` tokens. * It can recognize *consecutive whitespaces*, *newlines*, and *tabs* to handle structured texts better. * We turned off the default behaviour of prepending leading whitespace because it is not beneficial for processing Japanese. * Specifically, single whitespace is always processed as one token so that any English word won't have a preceding whitespace like in many other tokenizers (e.g. `_Hello`). * This decision trades the English processing efficiency for a unified way to treat whitespaces. * It leads to a significantly lower loss of next token prediction on English data because whitespaces are easy to predict. * **Don't forget to set `use_fast=False` to make the above features function correctly.** --- # How to cite ```bibtex @misc{rinna-bilingual-gpt-neox-4b-instruction-sft, title = {rinna/bilingual-gpt-neox-4b-instruction-sft}, author = {Zhao, Tianyu and Sawada, Kei}, url = {https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft} } @inproceedings{sawada2024release, title = {Release of Pre-Trained Models for the {J}apanese Language}, author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh}, booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)}, month = {5}, year = {2024}, pages = {13898--13905}, url = {https://aclanthology.org/2024.lrec-main.1213}, note = {\url{https://arxiv.org/abs/2404.01657}} } ``` --- # Licenese [The MIT license](https://opensource.org/licenses/MIT)
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) bilingual-gpt-neox-4b-instruction-sft - bnb 4bits - Model creator: https://huggingface.co/rinna/ - Original model: https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft/ Original model description: --- thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png license: mit datasets: - Anthropic/hh-rlhf language: - ja - en inference: false base_model: rinna/bilingual-gpt-neox-4b --- # bilingual-gpt-neox-4b-instruction-sft ![rinna-icon](./rinna.png) --- # Update - **2023/08/02** We uploaded the newly trained `rinna/bilingual-gpt-neox-4b-instruction-sft` with the MIT license. - Please refrain from using the previous model released on 2023/07/31 for commercial purposes if you have already downloaded it. - The new model released on 2023/08/02 is built from datasets with less strict licenses and has better evaluation performance, so we suggest using the new model. - For reference, we provide the MD5 checksum values for the `pytorch_model.bin` files of the previous and current models. - 2023/07/31 model: `edf190a323c0ae63f71476700fb0b462` - 2023/08/02 model: `de72aa5b66beee7b65783c96f687d186` - **2023/07/31** In the previously released `rinna/bilingual-gpt-neox-4b-instruction-sft`, we found that part of the training data (i.e. Openchat ShareGPT4 and WizardLM) have a non-commercial license, and thus it does not comply with **the MIT license**. We decided to remove the previous version and build a new SFT model from datasets with less strict licenses. The new model will be uploaded in a few days. We sincerely apologize for our careless mistake. --- # Overview This repository provides an English-Japanese bilingual GPT-NeoX model of 3.8 billion parameters. The model is based on [`rinna/bilingual-gpt-neox-4b`](https://huggingface.co/rinna/bilingual-gpt-neox-4b) and has been finetuned to serve as an instruction-following conversational agent. * **Model architecture** A 36-layer, 2816-hidden-size transformer-based language model. * **Fine-tuning** The fine-tuning data is the subset of the following datasets. 
* [Anthropic HH RLHF data](https://huggingface.co/datasets/Anthropic/hh-rlhf) and its Japanese translation * [FLAN Instruction Tuning data](https://github.com/google-research/FLAN) and its Japanese translation * **Model Series** | Variant | Link | | :-- | :--| | Bilingual 4B MiniGPT4 | https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4 | | Bilingual 4B PPO | https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-ppo | | Bilingual 4B SFT | https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft | | Bilingual 4B 8K | https://huggingface.co/rinna/bilingual-gpt-neox-4b-8k | | Bilingual 4B | https://huggingface.co/rinna/bilingual-gpt-neox-4b | | Japanese 3.6B PPO | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo | | Japanese 3.6B SFT-v2 | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft-v2 | | Japanese 3.6B SFT | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft | | Japanese 3.6B | https://huggingface.co/rinna/japanese-gpt-neox-3.6b | * **Contributors** [Tianyu Zhao](https://huggingface.co/tianyuz) and [Kei Sawada](https://huggingface.co/keisawada) --- # Benchmarking Our evaluation experiments suggest that the bilingual-gpt-neox-4b-instruction-sft model performs slightly better than the previous [Japanese GPT-NeoX 3.6B PPO](https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo) in Japanese tasks. - *The 4-task average accuracy is based on results of JCommonsenseQA, JNLI, MARC-ja, and JSQuAD.* - *The 6-task average accuracy is based on results of JCommonsenseQA, JNLI, MARC-ja, JSQuAD, XWinograd, and JAQKET-v2.* | Model | 4-task average accuracy | 6-task average accuracy | | :-- | :-- | :-- | | bilingual-gpt-neox-4b-instruction-ppo | 61.01 | 61.16 | | **bilingual-gpt-neox-4b-instruction-sft** | **61.02** | **61.69** | | bilingual-gpt-neox-4b | 56.12 | 51.83 | | japanese-gpt-neox-3.6b-instruction-ppo | 59.86 | 60.07 | | japanese-gpt-neox-3.6b | 55.07 | 50.32 | --- # I/O Format A special format has been adopted to construct inputs. * An input prompt is formatted as a conversation between `ユーザー` and `システム`. * Each input utterance consists of (1) its speaker (`"ユーザー"` or `"システム"`), (2) a colon (`":"`), (3) a whitespace (`" "`), and (4) utterance text (e.g. `"世界で一番高い山は?"`). * The input prompt should be ended with `"システム: "` to acknowledge the model to generate a response. * All the utterances in the input prompt should be separated by a newline `\n`. Following is an example to construct input from a conversation. ~~~python prompt = [ { "speaker": "ユーザー", "text": "Hello, you are an assistant that helps me learn Japanese." }, { "speaker": "システム", "text": "Sure, what can I do for you?" }, { "speaker": "ユーザー", "text": "VRはなんですか。" } ] prompt = [ f"{uttr['speaker']}: {uttr['text']}" for uttr in prompt ] prompt = "\n".join(prompt) prompt = ( prompt + "\n" + "システム: " ) print(prompt) """ ユーザー: Hello, you are an assistant that helps me learn Japanese. システム: Sure, what can I do for you? ユーザー: VRはなんですか。 システム: """ ~~~ --- # How to use the model **Notice:** Since the model is **sensitive to decoding hyper-parameters** (e.g. `temperature`, `top_p`, `top_k`, `repetition_penalty`), it is suggested to explore the best setting for your task. 
~~~~python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("rinna/bilingual-gpt-neox-4b-instruction-sft", use_fast=False) model = AutoModelForCausalLM.from_pretrained("rinna/bilingual-gpt-neox-4b-instruction-sft") if torch.cuda.is_available(): model = model.to("cuda") token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") with torch.no_grad(): output_ids = model.generate( token_ids.to(model.device), max_new_tokens=512, do_sample=True, temperature=1.0, top_p=0.85, pad_token_id=tokenizer.pad_token_id, bos_token_id=tokenizer.bos_token_id, eos_token_id=tokenizer.eos_token_id ) output = tokenizer.decode(output_ids.tolist()[0][token_ids.size(1):]) print(output) """VRとはVirtual Realityの略で、仮想現実とも呼ばれます。これは、コンピューターを使用して仮想世界を作り出し、仮想世界上でコンピューターのゲームや仮想世界を体験するための技術です。この技術は、コンピューターやモバイ ルデバイスの進歩によって、2015年以降、ますます普及しています。VRは、ゲームや仮想世界、その他のアプリケー ションなどのさまざまな分野で、コンピューターと人間の相互作用の新しい方法を提供しています。</s>""" ~~~~ --- # Tokenization The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer. * The tokenizer has a vocabulary size of 65,536. * It uses *byte fallback* to decompose unknown text pieces into UTF-8 byte pieces to avoid producing `<UNK>` tokens. * It can recognize *consecutive whitespaces*, *newlines*, and *tabs* to handle structured texts better. * We turned off the default behaviour of prepending leading whitespace because it is not beneficial for processing Japanese. * Specifically, single whitespace is always processed as one token so that any English word won't have a preceding whitespace like in many other tokenizers (e.g. `_Hello`). * This decision trades the English processing efficiency for a unified way to treat whitespaces. * It leads to a significantly lower loss of next token prediction on English data because whitespaces are easy to predict. * **Don't forget to set `use_fast=False` to make the above features function correctly.** --- # How to cite ```bibtex @misc{rinna-bilingual-gpt-neox-4b-instruction-sft, title = {rinna/bilingual-gpt-neox-4b-instruction-sft}, author = {Zhao, Tianyu and Sawada, Kei}, url = {https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft} } @inproceedings{sawada2024release, title = {Release of Pre-Trained Models for the {J}apanese Language}, author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh}, booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)}, month = {5}, year = {2024}, pages = {13898--13905}, url = {https://aclanthology.org/2024.lrec-main.1213}, note = {\url{https://arxiv.org/abs/2404.01657}} } ``` --- # Licenese [The MIT license](https://opensource.org/licenses/MIT)
{}
task
[ "TRANSLATION" ]
46,000
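The card stored in this row is the original rinna model's description; the row itself points at a bitsandbytes 4-bit requantization. A hedged sketch for loading that quantized export directly (assumes `bitsandbytes`, a CUDA device, and a standard Transformers repo layout):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/rinna_-_bilingual-gpt-neox-4b-instruction-sft-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=False)  # slow tokenizer, as the original card advises
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Prompt format follows the I/O convention described in the original card.
prompt = "ユーザー: VRはなんですか。\nシステム: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=1.0, top_p=0.85)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```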
elsayovita/bge-base-financial-matryoshka-testing
elsayovita
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:6300", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "en", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:BAAI/bge-base-en-v1.5", "base_model:finetune:BAAI/bge-base-en-v1.5", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-09-03T07:46:19Z
2024-09-03T07:47:06+00:00
7
0
--- base_model: BAAI/bge-base-en-v1.5 datasets: [] language: - en library_name: sentence-transformers license: apache-2.0 metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:6300 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss widget: - source_sentence: The net interest income for the first quarter of 2023 was $14,448 million. sentences: - What was the fair value of investments in fixed maturity securities at the end of 2023 after a hypothetical 100 basis point increase in interest rates? - What was the net interest income for the first quarter of 2023? - What are the expected consequences of the EMIR 3.0 proposals for ICE Futures Europe and ICE Clear Europe? - source_sentence: The consolidated financial statements and accompanying notes are listed in Part IV, Item 15(a)(1) of the Annual Report on Form 10-K sentences: - What was the total amount invested in purchases from Vebu during the year ended December 31, 2023? - What section of the Annual Report on Form 10-K includes the consolidated financial statements and accompanying notes? - What is the purpose of using constant currency to measure financial performance? - source_sentence: Cash provided by operating activities was impacted by the provision from the Tax Cuts and Jobs Act of 2017 which became effective in fiscal 2023 and requires the capitalization and amortization of research and development costs. The change increased our cash taxes paid in fiscal 2023. sentences: - How much did the provision from the Tax Cuts and Jobs Act increase the cash taxes paid in fiscal 2023? - What is the principal amount of debt maturing in fiscal year 2023? - What is the projected increase in effective tax rate starting from fiscal 2024? - source_sentence: Item 8. Financial Statements and Supplementary Data. sentences: - How does FedEx Express primarily fulfill its jet fuel needs? - What legislative act in the United States established a new corporate alternative minimum tax of 15% on large corporations? - What is the title of Item 8 that covers financial data in the report? - source_sentence: Electronic Arts paid cash dividends totaling $210 million during the fiscal year ended March 31, 2023. sentences: - What was the total cash dividend paid by Electronic Arts in the fiscal year ended March 31, 2023? - What was the SRO's accrued amount as a receivable for CAT implementation expenses as of December 31, 2023? - What percentage of our total U.S. dialysis patients in 2023 was covered under some form of government-based program? 
model-index: - name: BGE base Financial Matryoshka results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 768 type: dim_768 metrics: - type: cosine_accuracy@1 value: 0.6842857142857143 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.8128571428571428 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.86 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.8985714285714286 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.6842857142857143 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.27095238095238094 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.172 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.08985714285714284 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.6842857142857143 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.8128571428571428 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.86 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.8985714285714286 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.7929325221389678 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.7588820861678003 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7629563080276819 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 512 type: dim_512 metrics: - type: cosine_accuracy@1 value: 0.6857142857142857 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.82 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.8585714285714285 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9057142857142857 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.6857142857142857 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2733333333333333 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.1717142857142857 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09057142857142857 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.6857142857142857 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.82 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.8585714285714285 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9057142857142857 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.7963845502294126 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.7614115646258502 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7648837754793252 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.6771428571428572 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.8042857142857143 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.8571428571428571 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.89 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.6771428571428572 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2680952380952381 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.17142857142857137 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.08899999999999998 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.6771428571428572 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.8042857142857143 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.8571428571428571 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.89 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.784627431591255 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 
0.7506218820861676 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7549970210504993 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 128 type: dim_128 metrics: - type: cosine_accuracy@1 value: 0.6614285714285715 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.7957142857142857 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.8271428571428572 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.88 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.6614285714285715 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2652380952380952 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.1654285714285714 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.088 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.6614285714285715 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.7957142857142857 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.8271428571428572 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.88 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.7728766261768507 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.7384614512471652 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.74301468254304 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 64 type: dim_64 metrics: - type: cosine_accuracy@1 value: 0.6128571428571429 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.7628571428571429 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.7957142857142857 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.8471428571428572 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.6128571428571429 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2542857142857143 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.15914285714285714 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.0847142857142857 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.6128571428571429 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.7628571428571429 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.7957142857142857 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.8471428571428572 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.7315764159717033 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.6946094104308389 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7001749041654559 name: Cosine Map@100 --- # BGE base Financial Matryoshka This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("elsayovita/bge-base-financial-matryoshka-testing") # Run inference sentences = [ 'Electronic Arts paid cash dividends totaling $210 million during the fiscal year ended March 31, 2023.', 'What was the total cash dividend paid by Electronic Arts in the fiscal year ended March 31, 2023?', "What was the SRO's accrued amount as a receivable for CAT implementation expenses as of December 31, 2023?", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_768` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:----------| | cosine_accuracy@1 | 0.6843 | | cosine_accuracy@3 | 0.8129 | | cosine_accuracy@5 | 0.86 | | cosine_accuracy@10 | 0.8986 | | cosine_precision@1 | 0.6843 | | cosine_precision@3 | 0.271 | | cosine_precision@5 | 0.172 | | cosine_precision@10 | 0.0899 | | cosine_recall@1 | 0.6843 | | cosine_recall@3 | 0.8129 | | cosine_recall@5 | 0.86 | | cosine_recall@10 | 0.8986 | | cosine_ndcg@10 | 0.7929 | | cosine_mrr@10 | 0.7589 | | **cosine_map@100** | **0.763** | #### Information Retrieval * Dataset: `dim_512` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.6857 | | cosine_accuracy@3 | 0.82 | | cosine_accuracy@5 | 0.8586 | | cosine_accuracy@10 | 0.9057 | | cosine_precision@1 | 0.6857 | | cosine_precision@3 | 0.2733 | | cosine_precision@5 | 0.1717 | | cosine_precision@10 | 0.0906 | | cosine_recall@1 | 0.6857 | | cosine_recall@3 | 0.82 | | cosine_recall@5 | 0.8586 | | cosine_recall@10 | 0.9057 | | cosine_ndcg@10 | 0.7964 | | cosine_mrr@10 | 0.7614 | | **cosine_map@100** | **0.7649** | #### Information Retrieval * Dataset: `dim_256` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:----------| | cosine_accuracy@1 | 0.6771 | | cosine_accuracy@3 | 0.8043 | | cosine_accuracy@5 | 0.8571 | | cosine_accuracy@10 | 0.89 | | cosine_precision@1 | 0.6771 | | cosine_precision@3 | 0.2681 | | cosine_precision@5 | 0.1714 | | cosine_precision@10 | 0.089 | | cosine_recall@1 | 0.6771 | | cosine_recall@3 | 0.8043 | | cosine_recall@5 | 0.8571 | | cosine_recall@10 | 0.89 | | cosine_ndcg@10 | 0.7846 | | cosine_mrr@10 | 0.7506 | | **cosine_map@100** | **0.755** | #### Information Retrieval * Dataset: `dim_128` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:----------| | cosine_accuracy@1 | 0.6614 | | cosine_accuracy@3 | 0.7957 | | cosine_accuracy@5 | 0.8271 | | cosine_accuracy@10 | 0.88 | | cosine_precision@1 | 0.6614 | | cosine_precision@3 | 0.2652 | | cosine_precision@5 | 0.1654 | | cosine_precision@10 | 0.088 | | cosine_recall@1 | 0.6614 | | cosine_recall@3 | 0.7957 | | cosine_recall@5 | 0.8271 | | cosine_recall@10 | 0.88 | | cosine_ndcg@10 | 0.7729 | | cosine_mrr@10 | 0.7385 | | **cosine_map@100** | **0.743** | #### Information Retrieval * Dataset: `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | 
|:--------------------|:-----------| | cosine_accuracy@1 | 0.6129 | | cosine_accuracy@3 | 0.7629 | | cosine_accuracy@5 | 0.7957 | | cosine_accuracy@10 | 0.8471 | | cosine_precision@1 | 0.6129 | | cosine_precision@3 | 0.2543 | | cosine_precision@5 | 0.1591 | | cosine_precision@10 | 0.0847 | | cosine_recall@1 | 0.6129 | | cosine_recall@3 | 0.7629 | | cosine_recall@5 | 0.7957 | | cosine_recall@10 | 0.8471 | | cosine_ndcg@10 | 0.7316 | | cosine_mrr@10 | 0.6946 | | **cosine_map@100** | **0.7002** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 6,300 training samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 46.86 tokens</li><li>max: 252 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 20.5 tokens</li><li>max: 51 tokens</li></ul> | * Samples: | positive | anchor | |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------| | <code>For the year ended December 31, 2023, the average balance for savings and transaction accounts was $86,102 and the interest expense for these accounts was $3,357.</code> | <code>What was the average balance and interest expense for savings and transaction accounts in the year 2023?</code> | | <code>Limits are used at various levels and types to manage the size of liquidity exposures, relative to acceptable risk levels according the the organization's liquidity risk tolerance.</code> | <code>What is the purpose of the liquidity risk limits used by the organization?</code> | | <code>Value-Based Care refers to the goal of incentivizing healthcare providers to simultaneously increase quality while lowering the cost of care for patients.</code> | <code>What is the primary goal of value-based care according to the company?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `gradient_accumulation_steps`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 2 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `tf32`: False - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - 
`per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 16 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 2 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: False - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 | |:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:| | 0.8122 | 10 | 1.4746 | - | - | - | - | - | | 0.9746 | 12 | - 
| 0.7378 | 0.7470 | 0.7589 | 0.6941 | 0.7563 | | 1.6244 | 20 | 0.6694 | - | - | - | - | - | | **1.9492** | **24** | **-** | **0.743** | **0.755** | **0.7649** | **0.7002** | **0.763** | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.42.4 - PyTorch: 2.4.0+cu121 - Accelerate: 0.32.1 - Datasets: 2.21.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
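The per-dimension `cosine_map@100` values in the training log above come from evaluating the same embeddings truncated to each Matryoshka size, so at inference time the model can trade a little retrieval quality for much smaller vectors. Below is a minimal usage sketch, assuming the checkpoint id shown in the card (`elsayovita/bge-base-financial-matryoshka-testing`) and a sentence-transformers release recent enough to support the `truncate_dim` argument (added in 2.7; the card lists 3.0.1); the choice of 256 dimensions is only an example.

```python
from sentence_transformers import SentenceTransformer

# Keep only the first 256 dimensions of every embedding; any of the trained
# Matryoshka sizes (768, 512, 256, 128, 64) could be used instead.
model = SentenceTransformer(
    "elsayovita/bge-base-financial-matryoshka-testing",
    truncate_dim=256,
)

queries = ["What was the total cash dividend paid by Electronic Arts in the fiscal year ended March 31, 2023?"]
passages = ["Electronic Arts paid cash dividends totaling $210 million during the fiscal year ended March 31, 2023."]

query_embeddings = model.encode(queries)     # shape: (1, 256)
passage_embeddings = model.encode(passages)  # shape: (1, 256)
print(model.similarity(query_embeddings, passage_embeddings))
```

Shrinking the embedding width reduces index size and search cost roughly proportionally, which is the usual reason to pick 256 or 128 over the full 768 when the map@100 drop reported above is acceptable.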
null
Non_BioNLP
# BGE base Financial Matryoshka This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("elsayovita/bge-base-financial-matryoshka-testing") # Run inference sentences = [ 'Electronic Arts paid cash dividends totaling $210 million during the fiscal year ended March 31, 2023.', 'What was the total cash dividend paid by Electronic Arts in the fiscal year ended March 31, 2023?', "What was the SRO's accrued amount as a receivable for CAT implementation expenses as of December 31, 2023?", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_768` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:----------| | cosine_accuracy@1 | 0.6843 | | cosine_accuracy@3 | 0.8129 | | cosine_accuracy@5 | 0.86 | | cosine_accuracy@10 | 0.8986 | | cosine_precision@1 | 0.6843 | | cosine_precision@3 | 0.271 | | cosine_precision@5 | 0.172 | | cosine_precision@10 | 0.0899 | | cosine_recall@1 | 0.6843 | | cosine_recall@3 | 0.8129 | | cosine_recall@5 | 0.86 | | cosine_recall@10 | 0.8986 | | cosine_ndcg@10 | 0.7929 | | cosine_mrr@10 | 0.7589 | | **cosine_map@100** | **0.763** | #### Information Retrieval * Dataset: `dim_512` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.6857 | | cosine_accuracy@3 | 0.82 | | cosine_accuracy@5 | 0.8586 | | cosine_accuracy@10 | 0.9057 | | cosine_precision@1 | 0.6857 | | cosine_precision@3 | 0.2733 | | cosine_precision@5 | 0.1717 | | cosine_precision@10 | 0.0906 | | cosine_recall@1 | 0.6857 | | cosine_recall@3 | 0.82 | | cosine_recall@5 | 0.8586 | | cosine_recall@10 | 0.9057 | | cosine_ndcg@10 | 0.7964 | | cosine_mrr@10 | 0.7614 | | **cosine_map@100** | **0.7649** | #### Information Retrieval * Dataset: `dim_256` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:----------| | cosine_accuracy@1 | 0.6771 | | cosine_accuracy@3 | 0.8043 | | cosine_accuracy@5 | 0.8571 | | cosine_accuracy@10 | 0.89 | | cosine_precision@1 | 0.6771 | | cosine_precision@3 | 0.2681 | | cosine_precision@5 | 0.1714 | | cosine_precision@10 | 0.089 | | cosine_recall@1 | 0.6771 | | cosine_recall@3 | 0.8043 | | cosine_recall@5 | 0.8571 | | cosine_recall@10 | 0.89 | | cosine_ndcg@10 | 0.7846 | | cosine_mrr@10 | 0.7506 | | **cosine_map@100** | **0.755** | #### Information Retrieval * Dataset: `dim_128` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:----------| | cosine_accuracy@1 | 0.6614 | | cosine_accuracy@3 | 0.7957 | | cosine_accuracy@5 | 0.8271 | | cosine_accuracy@10 | 0.88 | | cosine_precision@1 | 0.6614 | | cosine_precision@3 | 0.2652 | | cosine_precision@5 | 0.1654 | | cosine_precision@10 | 0.088 | | cosine_recall@1 | 0.6614 | | cosine_recall@3 | 0.7957 | | cosine_recall@5 | 0.8271 | | cosine_recall@10 | 0.88 | | cosine_ndcg@10 | 0.7729 | | cosine_mrr@10 | 0.7385 | | **cosine_map@100** | **0.743** | #### Information Retrieval * Dataset: `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | 
|:--------------------|:-----------| | cosine_accuracy@1 | 0.6129 | | cosine_accuracy@3 | 0.7629 | | cosine_accuracy@5 | 0.7957 | | cosine_accuracy@10 | 0.8471 | | cosine_precision@1 | 0.6129 | | cosine_precision@3 | 0.2543 | | cosine_precision@5 | 0.1591 | | cosine_precision@10 | 0.0847 | | cosine_recall@1 | 0.6129 | | cosine_recall@3 | 0.7629 | | cosine_recall@5 | 0.7957 | | cosine_recall@10 | 0.8471 | | cosine_ndcg@10 | 0.7316 | | cosine_mrr@10 | 0.6946 | | **cosine_map@100** | **0.7002** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 6,300 training samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 46.86 tokens</li><li>max: 252 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 20.5 tokens</li><li>max: 51 tokens</li></ul> | * Samples: | positive | anchor | |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------| | <code>For the year ended December 31, 2023, the average balance for savings and transaction accounts was $86,102 and the interest expense for these accounts was $3,357.</code> | <code>What was the average balance and interest expense for savings and transaction accounts in the year 2023?</code> | | <code>Limits are used at various levels and types to manage the size of liquidity exposures, relative to acceptable risk levels according the the organization's liquidity risk tolerance.</code> | <code>What is the purpose of the liquidity risk limits used by the organization?</code> | | <code>Value-Based Care refers to the goal of incentivizing healthcare providers to simultaneously increase quality while lowering the cost of care for patients.</code> | <code>What is the primary goal of value-based care according to the company?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `gradient_accumulation_steps`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 2 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `tf32`: False - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - 
`per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 16 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 2 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: False - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 | |:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:| | 0.8122 | 10 | 1.4746 | - | - | - | - | - | | 0.9746 | 12 | - 
| 0.7378 | 0.7470 | 0.7589 | 0.6941 | 0.7563 | | 1.6244 | 20 | 0.6694 | - | - | - | - | - | | **1.9492** | **24** | **-** | **0.743** | **0.755** | **0.7649** | **0.7002** | **0.763** | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.42.4 - PyTorch: 2.4.0+cu121 - Accelerate: 0.32.1 - Datasets: 2.21.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
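For readers who want to reproduce a comparable fine-tune, the loss configuration and non-default hyperparameters listed above translate roughly into the sketch below. This is an illustration rather than the original training script: the two inline training pairs and the output directory are placeholders, while the base model, Matryoshka dimensions, epochs, batch size, gradient accumulation, learning rate, scheduler, warmup ratio, bf16 flag and no-duplicates batch sampler are taken from the card.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Base model named in the card
model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Placeholder rows standing in for the 6,300 anchor/positive pairs described above
train_dataset = Dataset.from_dict({
    "anchor": [
        "What was the average balance and interest expense for savings and transaction accounts in the year 2023?",
        "What is the primary goal of value-based care according to the company?",
    ],
    "positive": [
        "For the year ended December 31, 2023, the average balance for savings and transaction accounts was $86,102 and the interest expense for these accounts was $3,357.",
        "Value-Based Care refers to the goal of incentivizing healthcare providers to simultaneously increase quality while lowering the cost of care for patients.",
    ],
})

# MultipleNegativesRankingLoss wrapped in MatryoshkaLoss, matching the JSON config above
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # placeholder path
    num_train_epochs=2,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```

Per-epoch evaluation with the `InformationRetrievalEvaluator` at each dimension (which produced the metric tables above) and `load_best_model_at_end` are omitted here to keep the sketch short.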
{"base_model": "BAAI/bge-base-en-v1.5", "datasets": [], "language": ["en"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:6300", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "The net interest income for the first quarter of 2023 was $14,448 million.", "sentences": ["What was the fair value of investments in fixed maturity securities at the end of 2023 after a hypothetical 100 basis point increase in interest rates?", "What was the net interest income for the first quarter of 2023?", "What are the expected consequences of the EMIR 3.0 proposals for ICE Futures Europe and ICE Clear Europe?"]}, {"source_sentence": "The consolidated financial statements and accompanying notes are listed in Part IV, Item 15(a)(1) of the Annual Report on Form 10-K", "sentences": ["What was the total amount invested in purchases from Vebu during the year ended December 31, 2023?", "What section of the Annual Report on Form 10-K includes the consolidated financial statements and accompanying notes?", "What is the purpose of using constant currency to measure financial performance?"]}, {"source_sentence": "Cash provided by operating activities was impacted by the provision from the Tax Cuts and Jobs Act of 2017 which became effective in fiscal 2023 and requires the capitalization and amortization of research and development costs. The change increased our cash taxes paid in fiscal 2023.", "sentences": ["How much did the provision from the Tax Cuts and Jobs Act increase the cash taxes paid in fiscal 2023?", "What is the principal amount of debt maturing in fiscal year 2023?", "What is the projected increase in effective tax rate starting from fiscal 2024?"]}, {"source_sentence": "Item 8. Financial Statements and Supplementary Data.", "sentences": ["How does FedEx Express primarily fulfill its jet fuel needs?", "What legislative act in the United States established a new corporate alternative minimum tax of 15% on large corporations?", "What is the title of Item 8 that covers financial data in the report?"]}, {"source_sentence": "Electronic Arts paid cash dividends totaling $210 million during the fiscal year ended March 31, 2023.", "sentences": ["What was the total cash dividend paid by Electronic Arts in the fiscal year ended March 31, 2023?", "What was the SRO's accrued amount as a receivable for CAT implementation expenses as of December 31, 2023?", "What percentage of our total U.S. 
dialysis patients in 2023 was covered under some form of government-based program?"]}], "model-index": [{"name": "BGE base Financial Matryoshka", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 768", "type": "dim_768"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.6842857142857143, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.8128571428571428, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.86, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.8985714285714286, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.6842857142857143, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.27095238095238094, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.172, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.08985714285714284, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.6842857142857143, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.8128571428571428, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.86, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.8985714285714286, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7929325221389678, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.7588820861678003, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7629563080276819, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 512", "type": "dim_512"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.6857142857142857, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.82, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8585714285714285, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9057142857142857, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.6857142857142857, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.2733333333333333, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.1717142857142857, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09057142857142857, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.6857142857142857, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.82, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8585714285714285, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9057142857142857, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7963845502294126, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.7614115646258502, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7648837754793252, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 256", "type": "dim_256"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.6771428571428572, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.8042857142857143, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8571428571428571, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.89, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.6771428571428572, "name": 
"Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.2680952380952381, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.17142857142857137, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.08899999999999998, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.6771428571428572, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.8042857142857143, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8571428571428571, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.89, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.784627431591255, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.7506218820861676, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7549970210504993, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 128", "type": "dim_128"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.6614285714285715, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.7957142857142857, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8271428571428572, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.88, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.6614285714285715, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.2652380952380952, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.1654285714285714, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.088, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.6614285714285715, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.7957142857142857, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8271428571428572, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.88, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7728766261768507, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.7384614512471652, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.74301468254304, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 64", "type": "dim_64"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.6128571428571429, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.7628571428571429, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.7957142857142857, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.8471428571428572, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.6128571428571429, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.2542857142857143, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.15914285714285714, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.0847142857142857, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.6128571428571429, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.7628571428571429, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.7957142857142857, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.8471428571428572, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7315764159717033, 
"name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.6946094104308389, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7001749041654559, "name": "Cosine Map@100"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,001
spacemanidol/flan-t5-small-xsum
spacemanidol
text2text-generation
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-02-12T13:20:26Z
2023-02-12T14:41:33+00:00
10
0
--- datasets: - xsum license: apache-2.0 metrics: - rouge tags: - generated_from_trainer model-index: - name: small results: - task: type: summarization name: Summarization dataset: name: xsum type: xsum config: default split: validation args: default metrics: - type: rouge value: 33.2675 name: Rouge1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.0998 - Rouge1: 33.2675 - Rouge2: 11.0862 - Rougel: 26.1709 - Rougelsum: 26.1668 - Gen Len: 28.0123 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.12.1
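The card above stops at training details, so a short inference sketch may help. It assumes the checkpoint is published under the repository id shown for this entry (`spacemanidol/flan-t5-small-xsum`) and that the standard transformers summarization pipeline is appropriate for it; the example article is invented.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the summarization pipeline
summarizer = pipeline("summarization", model="spacemanidol/flan-t5-small-xsum")

article = (
    "The local council confirmed on Tuesday that the town bridge will be closed "
    "for six weeks while engineers carry out structural repairs, with buses "
    "diverted through the town centre during that period."
)

# XSum-style single-sentence summary; the length limits roughly match the
# ~28-token average generation length reported above
result = summarizer(article, max_length=40, min_length=5)
print(result[0]["summary_text"])
```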
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.0998 - Rouge1: 33.2675 - Rouge2: 11.0862 - Rougel: 26.1709 - Rougelsum: 26.1668 - Gen Len: 28.0123 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.12.1
{"datasets": ["xsum"], "license": "apache-2.0", "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "small", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "xsum", "type": "xsum", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "rouge", "value": 33.2675, "name": "Rouge1"}]}]}]}
task
[ "SUMMARIZATION" ]
46,002
RichardErkhov/meta-llama_-_Llama-3.2-1B-4bits
RichardErkhov
null
[ "safetensors", "llama", "arxiv:2204.05149", "4-bit", "bitsandbytes", "region:us" ]
2024-10-06T09:44:55Z
2024-10-06T09:45:53+00:00
7
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-3.2-1B - bnb 4bits - Model creator: https://huggingface.co/meta-llama/ - Original model: https://huggingface.co/meta-llama/Llama-3.2-1B/ Original model description: --- language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: llama3.2 extra_gated_prompt: >- ### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT Llama 3.2 Version Release Date: September 25, 2024 “Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. “Documentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview. “Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. “Llama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads. “Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement. “Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. 
You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement. 2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. 
If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 3.2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy). #### Prohibited Uses We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law 5. 
Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta  2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following: 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997 9. Guns and illegal weapons (including weapon development) 10. Illegal drugs and regulated/controlled substances 11. Operation of critical infrastructure, transportation technologies, or heavy machinery 12. Self-harm or harm to others, including suicide, cutting, and eating disorders 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following: 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 16. Generating, promoting, or further distributing spam 17. Impersonating another individual without consent, authorization, or legal right 18. Representing that the use of Llama 3.2 or outputs are human-generated 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement  4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2 With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models. 
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. 
**Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-1B, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with transformers >= 4.43.0 onward, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function. Make sure to update your transformers installation via pip install --upgrade transformers. ```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-1B" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto" ) pipe("The key to life is") ``` ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama). To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.2-1B --include "original/*" --local-dir Llama-3.2-1B ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. ## **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. 
| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | ----- | :---: | :---: | :---: | | Llama 3.2 1B | 370k | \- | 700 | 107 | 0 | | Llama 3.2 3B | 460k | \- | 700 | 133 | 0 | | Total | 830k | 86k | | 240 | 0 | The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO). **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Benchmarks \- English Text In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. ### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | ----- | ----- | :---: | :---: | :---: | :---: | :---: | | General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 | | | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 | | | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 | | Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 | | | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 | | | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 | | Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 | ### Instruction Tuned Models | Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | | General | | MMLU | 5 | macro\_avg/acc | 49.3 | 63.4 | 69.4 | | Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 40.1 | 40.9 | | Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 19.0 | 17.2 | | Instruction following | | IFEval | 0 | avg(prompt/instruction acc loose/strict) | 59.5 | 77.4 | 80.4 | | Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 77.7 | 84.5 | | | | MATH (CoT) | 0 | final\_em | 30.6 | 47.3 | 51.9 | | Reasoning | | ARC-C | 0 | acc | 59.4 | 78.6 | 83.4 | | | | GPQA | 0 | acc | 27.2 | 32.8 | 32.8 | | | | Hellaswag | 0 | acc | 41.2 | 69.8 | 78.7 | | Tool Use | | BFCL V2 | 0 | acc | 25.7 | 67.0 | 70.9 | | | | Nexus | 0 | macro\_avg/acc | 13.5 | 34.3 | 38.5 | | Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | 19.8 | 27.3 | | | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | 63.3 | 72.2 | | | | NIH/Multi-needle | 0 | recall | 75.0 | 84.7 | 98.8 | | Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 58.2 | 68.9 | ### Multilingual Benchmarks | Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | :---: | :---: | :---: | :---: | :---: | | General 
| MMLU (5-shot, macro\_avg/acc) | Portuguese | 39.82 | 54.48 | 62.12 | | | | Spanish | 41.5 | 55.1 | 62.5 | | | | Italian | 39.8 | 53.8 | 61.6 | | | | German | 39.2 | 53.3 | 60.6 | | | | French | 40.5 | 54.6 | 62.3 | | | | Hindi | 33.5 | 43.3 | 50.9 | | | | Thai | 34.7 | 44.5 | 50.3 | ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm 3. Provide protections for the community to help prevent the misuse of our models ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/). #### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. 
As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. **Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. 
We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). **Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. 
**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
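The Training Data section of this card notes that logits from the larger Llama 3.1 8B and 70B models were used as token-level targets during pretraining of the 1B and 3B models. Meta has not published the exact distillation code, so the snippet below is only a generic, illustrative sketch of token-level logit distillation (softened KL divergence between teacher and student next-token distributions); the temperature, loss scaling and tensor shapes are assumptions, not released values.

```python
import torch
import torch.nn.functional as F

def logit_distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Generic knowledge-distillation loss on next-token logits.

    Both inputs have shape (batch, seq_len, vocab_size). The temperature is an
    illustrative choice, not a published Llama 3.2 hyperparameter."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2, as is standard when distilling with a softmax temperature.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2

# Toy shapes purely for illustration (the real vocabulary is much larger).
student_logits = torch.randn(2, 8, 512)
teacher_logits = torch.randn(2, 8, 512)
loss = logit_distillation_loss(student_logits, teacher_logits)
```

In practice such a term would be combined with the usual next-token cross-entropy on the training data; the exact mixture used for Llama 3.2 is not described in the card.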
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-3.2-1B - bnb 4bits - Model creator: https://huggingface.co/meta-llama/ - Original model: https://huggingface.co/meta-llama/Llama-3.2-1B/ Original model description: --- language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: llama3.2 extra_gated_prompt: >- ### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT Llama 3.2 Version Release Date: September 25, 2024 “Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. “Documentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview. “Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. “Llama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads. “Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement. “Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. 
You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement. 2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. 
If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 3.2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy). #### Prohibited Uses We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law 5. 
Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta  2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following: 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997 9. Guns and illegal weapons (including weapon development) 10. Illegal drugs and regulated/controlled substances 11. Operation of critical infrastructure, transportation technologies, or heavy machinery 12. Self-harm or harm to others, including suicide, cutting, and eating disorders 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following: 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 16. Generating, promoting, or further distributing spam 17. Impersonating another individual without consent, authorization, or legal right 18. Representing that the use of Llama 3.2 or outputs are human-generated 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement  4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2 With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models. 
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. 
**Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-1B, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with transformers >= 4.43.0 onward, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function. Make sure to update your transformers installation via pip install --upgrade transformers. ```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-1B" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto" ) pipe("The key to life is") ``` ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama). To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.2-1B --include "original/*" --local-dir Llama-3.2-1B ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. ## **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. 
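The "How to use" section above mentions that, besides the pipeline abstraction, you can run inference by leveraging the Auto classes together with the generate() function. A minimal sketch of that path is shown below; the generation settings (max_new_tokens, dtype, device placement) are illustrative choices, not official recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer("The key to life is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)  # illustrative length
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```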
| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | ----- | :---: | :---: | :---: | | Llama 3.2 1B | 370k | \- | 700 | 107 | 0 | | Llama 3.2 3B | 460k | \- | 700 | 133 | 0 | | Total | 830k | 86k | | 240 | 0 | The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO). **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Benchmarks \- English Text In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. ### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | ----- | ----- | :---: | :---: | :---: | :---: | :---: | | General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 | | | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 | | | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 | | Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 | | | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 | | | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 | | Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 | ### Instruction Tuned Models | Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | | General | | MMLU | 5 | macro\_avg/acc | 49.3 | 63.4 | 69.4 | | Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 40.1 | 40.9 | | Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 19.0 | 17.2 | | Instruction following | | IFEval | 0 | avg(prompt/instruction acc loose/strict) | 59.5 | 77.4 | 80.4 | | Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 77.7 | 84.5 | | | | MATH (CoT) | 0 | final\_em | 30.6 | 47.3 | 51.9 | | Reasoning | | ARC-C | 0 | acc | 59.4 | 78.6 | 83.4 | | | | GPQA | 0 | acc | 27.2 | 32.8 | 32.8 | | | | Hellaswag | 0 | acc | 41.2 | 69.8 | 78.7 | | Tool Use | | BFCL V2 | 0 | acc | 25.7 | 67.0 | 70.9 | | | | Nexus | 0 | macro\_avg/acc | 13.5 | 34.3 | 38.5 | | Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | 19.8 | 27.3 | | | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | 63.3 | 72.2 | | | | NIH/Multi-needle | 0 | recall | 75.0 | 84.7 | 98.8 | | Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 58.2 | 68.9 | ### Multilingual Benchmarks | Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | :---: | :---: | :---: | :---: | :---: | :---: | | General 
| MMLU (5-shot, macro\_avg/acc) | Portuguese | 39.82 | 54.48 | 62.12 | | | | Spanish | 41.5 | 55.1 | 62.5 | | | | Italian | 39.8 | 53.8 | 61.6 | | | | German | 39.2 | 53.3 | 60.6 | | | | French | 40.5 | 54.6 | 62.3 | | | | Hindi | 33.5 | 43.3 | 50.9 | | | | Thai | 34.7 | 44.5 | 50.3 | ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm 3. Provide protections for the community to help prevent the misuse of our models ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/). #### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. 
As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. **Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. 
We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). **Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. 
**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
{}
task
[ "SUMMARIZATION" ]
46,003
yeefever/not-real-facts2
yeefever
summarization
[ "transformers", "pytorch", "t5", "text2text-generation", "summarization", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-09-07T22:26:36Z
2023-09-08T02:36:16+00:00
15
0
--- pipeline_tag: summarization ---
null
Non_BioNLP
{"pipeline_tag": "summarization"}
task
[ "SUMMARIZATION" ]
46,004
morit/german_xlm_xnli
morit
zero-shot-classification
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "zero-shot-classification", "de", "dataset:xnli", "arxiv:1911.02116", "arxiv:2104.12250", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-12-29T16:49:50Z
2023-01-24T08:45:56+00:00
147
3
--- datasets: - xnli language: - de license: mit metrics: - accuracy pipeline_tag: zero-shot-classification ---

# XLM-ROBERTA-BASE-XNLI-DE

## Model description

This model takes the XLM-Roberta-base model, which was further pre-trained on a large corpus of Twitter data in multiple languages, following a strategy similar to the one introduced in the [Tweet Eval](https://github.com/cardiffnlp/tweeteval) framework. The model is then fine-tuned on the German part of the XNLI training dataset.

## Intended Usage

This model was developed for zero-shot text classification in the realm of hate speech detection. It is focused on German, since it was fine-tuned on data in that language. Because the base model was pre-trained on 100 different languages, it has shown some effectiveness in other languages as well. Please refer to the list of languages in the [XLM Roberta paper](https://arxiv.org/abs/1911.02116).

### Usage with Zero-Shot Classification pipeline

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="morit/german_xlm_xnli")
```

After loading the model you can classify sequences in the languages mentioned above. You can specify your sequences and a matching hypothesis to be able to classify your proposed candidate labels.

```python
sequence_to_classify = "Ich glaube Olaf Scholz wird sich im Bundestag durchsetzen."
# we can specify candidate labels and hypothesis:
candidate_labels = ["Politik", "Sport"]
hypothesis_template = "Dieses Beispiel ist {}"
# classify using the information provided
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template)
# Output
# {'sequence': 'Ich glaube Olaf Scholz wird sich im Bundestag durchsetzen.',
#  'labels': ['Politik', 'Sport'],
#  'scores': [0.6338292956352234, 0.3661706745624542]}
```

## Training

The base model was pre-trained on a set of 100 languages, followed by further training on 198M multilingual tweets as described in the original [paper](https://arxiv.org/abs/2104.12250). It was then fine-tuned on the training set of the German XNLI dataset, which is a machine-translated version of the MNLI dataset. Training ran for 5 epochs over the XNLI train set, with evaluation on the XNLI eval set at the end of every epoch to find the best performing model; the checkpoint with the highest eval accuracy was chosen at the end.

![Training Charts from wandb](screen_wandb.png)

- learning rate: 2e-5
- batch size: 32
- max sequence length: 128

using a GPU (NVIDIA GeForce RTX 3090), resulting in a training time of 1h 47 mins.

## Evaluation

The best performing model was evaluated on the XNLI test set to get a comparable result:

```
predict_accuracy = 77.54 %
```
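For readers who want to approximate the training setup described above, the sketch below wires the stated hyperparameters (learning rate 2e-5, batch size 32, max sequence length 128, 5 epochs, best-accuracy checkpoint selection on the German XNLI eval split) into a standard Trainer loop. It is only an illustration: the exact base checkpoint and preprocessing used by the author are not spelled out in the card, so the Twitter-adapted XLM-R checkpoint name below is an assumption.

```python
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "cardiffnlp/twitter-xlm-roberta-base"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=3)

xnli_de = load_dataset("xnli", "de")  # German XNLI (machine-translated MNLI)

def tokenize(batch):
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, max_length=128)

encoded = xnli_de.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}

args = TrainingArguments(
    output_dir="german_xlm_xnli",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    num_train_epochs=5,
    evaluation_strategy="epoch",   # evaluate on the eval set after every epoch
    save_strategy="epoch",
    load_best_model_at_end=True,   # keep the highest-accuracy checkpoint
    metric_for_best_model="accuracy",
)

trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"],
                  eval_dataset=encoded["validation"],
                  compute_metrics=compute_metrics)
# trainer.train()
```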
null
Non_BioNLP
# XLM-ROBERTA-BASE-XNLI-DE

## Model description

This model takes the XLM-Roberta-base model, which was further pre-trained on a large corpus of Twitter data in multiple languages, following a strategy similar to the one introduced in the [Tweet Eval](https://github.com/cardiffnlp/tweeteval) framework. The model is then fine-tuned on the German part of the XNLI training dataset.

## Intended Usage

This model was developed for zero-shot text classification in the realm of hate speech detection. It is focused on German, since it was fine-tuned on data in that language. Because the base model was pre-trained on 100 different languages, it has shown some effectiveness in other languages as well. Please refer to the list of languages in the [XLM Roberta paper](https://arxiv.org/abs/1911.02116).

### Usage with Zero-Shot Classification pipeline

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="morit/german_xlm_xnli")
```

After loading the model you can classify sequences in the languages mentioned above. You can specify your sequences and a matching hypothesis to be able to classify your proposed candidate labels.

```python
sequence_to_classify = "Ich glaube Olaf Scholz wird sich im Bundestag durchsetzen."
# we can specify candidate labels and hypothesis:
candidate_labels = ["Politik", "Sport"]
hypothesis_template = "Dieses Beispiel ist {}"
# classify using the information provided
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template)
# Output
# {'sequence': 'Ich glaube Olaf Scholz wird sich im Bundestag durchsetzen.',
#  'labels': ['Politik', 'Sport'],
#  'scores': [0.6338292956352234, 0.3661706745624542]}
```

## Training

The base model was pre-trained on a set of 100 languages, followed by further training on 198M multilingual tweets as described in the original [paper](https://arxiv.org/abs/2104.12250). It was then fine-tuned on the training set of the German XNLI dataset, which is a machine-translated version of the MNLI dataset. Training ran for 5 epochs over the XNLI train set, with evaluation on the XNLI eval set at the end of every epoch to find the best performing model; the checkpoint with the highest eval accuracy was chosen at the end.

![Training Charts from wandb](screen_wandb.png)

- learning rate: 2e-5
- batch size: 32
- max sequence length: 128

using a GPU (NVIDIA GeForce RTX 3090), resulting in a training time of 1h 47 mins.

## Evaluation

The best performing model was evaluated on the XNLI test set to get a comparable result:

```
predict_accuracy = 77.54 %
```
{"datasets": ["xnli"], "language": ["de"], "license": "mit", "metrics": ["accuracy"], "pipeline_tag": "zero-shot-classification"}
task
[ "TEXT_CLASSIFICATION" ]
46,005
sahithi001/finetuning-emotion-model
sahithi001
text-classification
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-02-09T16:44:10Z
2024-02-12T04:15:06+00:00
6
0
--- base_model: distilbert-base-uncased datasets: - emotion license: apache-2.0 metrics: - f1 - accuracy tags: - generated_from_trainer model-index: - name: finetuning-emotion-model results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: f1 value: 0.9235465717088435 name: F1 - type: accuracy value: 0.9235 name: Accuracy ---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuning-emotion-model

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:

- Loss: 0.2150
- F1: 0.9235
- Accuracy: 0.9235

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| No log        | 1.0   | 250  | 0.3045          | 0.9059 | 0.9065   |
| 0.5356        | 2.0   | 500  | 0.2150          | 0.9235 | 0.9235   |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
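The card above reports metrics and hyperparameters but no usage snippet. A minimal inference sketch is given below; the repository id is inferred from this model's name and the class names come from the emotion dataset, so verify both against the actual files (the checkpoint may expose labels as LABEL_0 … LABEL_5 if id2label was not set).

```python
from transformers import pipeline

# Repository id assumed from the model name; adjust if the checkpoint lives elsewhere.
classifier = pipeline("text-classification",
                      model="sahithi001/finetuning-emotion-model")

print(classifier("I can't wait to see you tomorrow!"))
# -> [{'label': ..., 'score': ...}]
# The emotion dataset's six classes are: sadness, joy, love, anger, fear, surprise.
```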
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-emotion-model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2150 - F1: 0.9235 - Accuracy: 0.9235 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:--------:| | No log | 1.0 | 250 | 0.3045 | 0.9059 | 0.9065 | | 0.5356 | 2.0 | 500 | 0.2150 | 0.9235 | 0.9235 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.1
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["f1", "accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "finetuning-emotion-model", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "f1", "value": 0.9235465717088435, "name": "F1"}, {"type": "accuracy", "value": 0.9235, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,006
fshaik8/SciNLI-4-llama-2-7b
fshaik8
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "nlp", "finetuned", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-07-28T18:42:38Z
2024-07-28T19:23:40+00:00
28
1
--- library_name: transformers tags: - text-generation - nlp - finetuned ---

# Model Card for Model ID

This model is a fine-tuned version of Llama-2-7b for sentence category classification using the SciNLI dataset. It has been trained to classify scientific sentences into categories like Contrasting, Reasoning, Entailment, and Neutral.

## Model Details

### Model Description

This model is designed for Natural Language Inference (NLI) on scientific texts. It has been fine-tuned on the SciNLI dataset, which consists of sentence pairs extracted from scholarly papers on NLP and computational linguistics. The model helps in recognizing the semantic relationship between pairs of sentences in scientific texts.

- **Developed by:** [Firoz Shaik]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [Causal Language Model]
- **Language(s) (NLP):** [English]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [Llama-2-7b]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

The model can be directly used for sentence category classification tasks in scientific literature.

### Downstream Use [optional]

The model can be integrated into larger NLP pipelines for tasks like scientific text summarization, question answering, and commonsense reasoning.

### Out-of-Scope Use

The model is not intended for general-purpose text classification outside the scientific domain. It should not be used for generating text that requires deep factual correctness without further validation.

## Bias, Risks, and Limitations

The model may inherit biases present in the SciNLI dataset. It might not perform well on non-scientific texts or texts from domains not represented in the training data.

### Recommendations

Users should be aware of the potential biases and limitations. It is recommended to validate the model's predictions, especially for critical applications.

## How to Get Started with the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Replace "path_to_model" with the actual path or Hub ID of this checkpoint
tokenizer = AutoTokenizer.from_pretrained("path_to_model")
model = AutoModelForCausalLM.from_pretrained("path_to_model")

inputs = tokenizer("Your input text", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

[More Information Needed]

## Training Details

### Training Data

The dataset can be downloaded from https://shorturl.at/gQKY6.

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

#### Training Hyperparameters

- Training regime: bf16 mixed precision
- Learning rate: 2e-4
- Batch size: 1
- Gradient accumulation steps: 4
- Warmup steps: 2
- Max steps: 20
- Optimizer: paged_adamw_8bit
- Evaluation strategy: steps
- Evaluation steps: 1

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results.
--> ### Testing Data, Factors & Metrics #### Testing Data The model was evaluated on the SciNLI test set, containing sentence pairs from scientific papers. #### Factors [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
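A note on inference: the quick-start snippet above stops at a raw forward pass, which does not by itself yield a category. For a causal LM fine-tuned as a classifier, the label is usually obtained by prompting with the sentence pair and generating a short completion. The sketch below illustrates that pattern; it is not the authors' documented usage, the prompt wording is an assumption (it must match whatever template was used during fine-tuning), and `"path_to_model"` is the same placeholder used in the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "path_to_model" is a placeholder, as in the card; the prompt template below is
# an assumption and must mirror the format used during fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("path_to_model")
model = AutoModelForCausalLM.from_pretrained("path_to_model")

LABELS = ["Contrasting", "Reasoning", "Entailment", "Neutral"]

def classify_pair(sentence1: str, sentence2: str) -> str:
    prompt = (
        "Classify the relationship between the two sentences as "
        "Contrasting, Reasoning, Entailment, or Neutral.\n"
        f"Sentence 1: {sentence1}\nSentence 2: {sentence2}\nLabel:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=5, do_sample=False)
    # Decode only the newly generated tokens, then map them to a known label.
    completion = tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return next((l for l in LABELS if l.lower() in completion.lower()), "Neutral")

print(classify_pair(
    "The results improve over the baseline.",
    "Therefore, the proposed method is effective.",
))
```

Greedy decoding with a handful of new tokens keeps the output constrained to a label-like string; constrained decoding or a dedicated classification head would be stricter alternatives.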
null
Non_BioNLP
# Model Card for Model ID This model is a fine-tuned version of Llama-2-7b for sentence category classification using the SciNLI dataset. It has been trained to classify scientific sentences into categories like Contrasting, Reasoning, Entailment, and Neutral. ## Model Details ### Model Description This model is designed for Natural Language Inference (NLI) on scientific texts. It has been fine-tuned on the SciNLI dataset, which consists of sentence pairs extracted from scholarly papers on NLP and computational linguistics. The model recognizes the semantic relationship between pairs of sentences in scientific texts. - **Developed by:** [Firoz Shaik] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [Causal Language Model] - **Language(s) (NLP):** [English] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [Llama-2-7b] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use The model can be directly used for sentence category classification tasks in scientific literature. ### Downstream Use [optional] The model can be integrated into larger NLP pipelines for tasks like scientific text summarization, question answering, and commonsense reasoning. ### Out-of-Scope Use The model is not intended for general-purpose text classification outside the scientific domain. It should not be used for generating text that requires deep factual correctness without further validation. ## Bias, Risks, and Limitations The model may inherit biases present in the SciNLI dataset. It might not perform well on non-scientific texts or texts from domains not represented in the training data. ### Recommendations Users should be aware of the potential biases and limitations. It is recommended to validate the model's predictions, especially for critical applications. ## How to Get Started with the Model ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("path_to_model") model = AutoModelForCausalLM.from_pretrained("path_to_model") inputs = tokenizer("Your input text", return_tensors="pt") outputs = model(**inputs) ``` [More Information Needed] ## Training Details ### Training Data The dataset can be downloaded from https://shorturl.at/gQKY6. ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] #### Training Hyperparameters - Training regime: bf16 mixed precision - Learning rate: 2e-4 - Batch size: 1 - Gradient accumulation steps: 4 - Warmup steps: 2 - Max steps: 20 - Optimizer: paged_adamw_8bit - Evaluation strategy: steps - Evaluation steps: 1 #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data The model was evaluated on the SciNLI test set, containing sentence pairs from scientific papers. 
#### Factors [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
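The hyperparameters listed in the card (bf16 mixed precision, learning rate 2e-4, batch size 1 with gradient accumulation 4, 2 warmup steps, 20 max steps, paged_adamw_8bit, per-step evaluation) map directly onto `transformers.TrainingArguments`. The sketch below shows one plausible way to express them; it is not the authors' training script, and the output directory name is a made-up placeholder.

```python
from transformers import TrainingArguments

# A sketch of TrainingArguments mirroring the hyperparameters listed in the card.
# bf16=True assumes bf16-capable hardware; paged_adamw_8bit requires bitsandbytes.
training_args = TrainingArguments(
    output_dir="llama2-scinli-finetune",   # hypothetical path
    bf16=True,                             # "bf16 mixed precision"
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    warmup_steps=2,
    max_steps=20,
    optim="paged_adamw_8bit",
    evaluation_strategy="steps",
    eval_steps=1,
    logging_steps=1,
)
print(training_args.optim)
```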
{"library_name": "transformers", "tags": ["text-generation", "nlp", "finetuned"]}
task
[ "TEXT_CLASSIFICATION", "QUESTION_ANSWERING", "SUMMARIZATION" ]
46,007
land25/distilbert-base-uncased_emotion_ft_0517
land25
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-05-17T16:04:08Z
2023-05-17T16:26:03+00:00
10
0
--- datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 - precision tags: - generated_from_trainer model-index: - name: distilbert-base-uncased_emotion_ft_0517 results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: accuracy value: 0.9345 name: Accuracy - type: f1 value: 0.9346851141275695 name: F1 - type: precision value: 0.9087842847016905 name: Precision --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased_emotion_ft_0517 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1479 - Accuracy: 0.9345 - F1: 0.9347 - Precision: 0.9088 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:| | 0.7913 | 1.0 | 250 | 0.2689 | 0.918 | 0.9162 | 0.9016 | | 0.2142 | 2.0 | 500 | 0.1764 | 0.929 | 0.9290 | 0.9109 | | 0.1415 | 3.0 | 750 | 0.1541 | 0.934 | 0.9345 | 0.8995 | | 0.1128 | 4.0 | 1000 | 0.1479 | 0.9345 | 0.9347 | 0.9088 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased_emotion_ft_0517 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1479 - Accuracy: 0.9345 - F1: 0.9347 - Precision: 0.9088 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:| | 0.7913 | 1.0 | 250 | 0.2689 | 0.918 | 0.9162 | 0.9016 | | 0.2142 | 2.0 | 500 | 0.1764 | 0.929 | 0.9290 | 0.9109 | | 0.1415 | 3.0 | 750 | 0.1541 | 0.934 | 0.9345 | 0.8995 | | 0.1128 | 4.0 | 1000 | 0.1479 | 0.9345 | 0.9347 | 0.9088 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
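The card reports metrics but no usage snippet; a text-classification pipeline is the usual way to query a fine-tuned DistilBERT checkpoint such as this one. A minimal sketch follows. Note that the emotion dataset has six classes (sadness, joy, love, anger, fear, surprise), but the label strings returned here depend on the id2label mapping saved with the model and may appear as LABEL_0 ... LABEL_5.

```python
from transformers import pipeline

# Minimal usage sketch for the fine-tuned checkpoint described in this card.
classifier = pipeline(
    "text-classification",
    model="land25/distilbert-base-uncased_emotion_ft_0517",
)

print(classifier("I can't wait to see my friends this weekend!"))
# Output shape: [{'label': '...', 'score': 0.98}] -- the label string depends on
# the id2label mapping stored with the model.
```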
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1", "precision"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased_emotion_ft_0517", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9345, "name": "Accuracy"}, {"type": "f1", "value": 0.9346851141275695, "name": "F1"}, {"type": "precision", "value": 0.9087842847016905, "name": "Precision"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,008
LKriesch/TwinTransitionMapper_AI
LKriesch
text-classification
[ "setfit", "safetensors", "xlm-roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:intfloat/multilingual-e5-large", "base_model:finetune:intfloat/multilingual-e5-large", "region:us" ]
2024-08-28T08:50:03Z
2024-09-02T06:37:59+00:00
13
0
--- base_model: intfloat/multilingual-e5-large library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: [] inference: true --- # TwinTransitionMapper_AI This repository contains the model for our paper entitled [Not all twins are identical: the digital layer of “twin” transition market applications](https://drive.google.com/file/d/1MN0GSl1FExHYkDyN_VhEt8yFwMX1MM4x/view?usp=drive_link) which is under review in Regional Studies (https://www.tandfonline.com/journals/cres20). This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained on paragraphs from German company websites using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. The model is designed to predict the AI capabilities of German companies based on their website texts. It is intended to be used in conjunction with the [Twin_Transition_Mapper_Green model] (https://huggingface.co/LKriesch/TwinTransitionMapper_Green) to identify companies contributing to the twin transition in Germany. For detailed information on the fine-tuning process and the results of these models, please refer to the [paper](https://drive.google.com/file/d/1MN0GSl1FExHYkDyN_VhEt8yFwMX1MM4x/view?usp=drive_link). ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens <!-- - **Number of Classes:** Unknown --> <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("LKriesch/TwinTransitionMapper_AI") # Run inference preds = model("I loved the spiderman movie!") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Framework Versions - Python: 3.9.19 - SetFit: 1.0.3 - Sentence Transformers: 3.0.1 - Transformers: 4.44.0 - PyTorch: 2.4.0+cu124 - Datasets: 2.16.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
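Since the model is meant to score paragraphs scraped from company websites, inference is typically run over batches of paragraphs rather than the single template sentence shown above. The sketch below shows batch prediction and class probabilities from the logistic-regression head; the two German paragraphs are invented for illustration and are not from the training data.

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("LKriesch/TwinTransitionMapper_AI")

# Hypothetical website paragraphs; real inputs would come from a scraping pipeline.
paragraphs = [
    "Wir entwickeln KI-gestützte Bilderkennung für die Qualitätskontrolle.",
    "Unser Familienbetrieb fertigt seit 1950 Holzmöbel nach Maß.",
]

preds = model.predict(paragraphs)          # hard labels, one per paragraph
probs = model.predict_proba(paragraphs)    # class probabilities from the head
print(list(preds))
print(probs)
```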
null
Non_BioNLP
# TwinTransitionMapper_AI This repository contains the model for our paper entitled [Not all twins are identical: the digital layer of “twin” transition market applications](https://drive.google.com/file/d/1MN0GSl1FExHYkDyN_VhEt8yFwMX1MM4x/view?usp=drive_link) which is under review in Regional Studies (https://www.tandfonline.com/journals/cres20). This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained on paragraphs from German company websites using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. The model is designed to predict the AI capabilities of German companies based on their website texts. It is intended to be used in conjunction with the [Twin_Transition_Mapper_Green model] (https://huggingface.co/LKriesch/TwinTransitionMapper_Green) to identify companies contributing to the twin transition in Germany. For detailed information on the fine-tuning process and the results of these models, please refer to the [paper](https://drive.google.com/file/d/1MN0GSl1FExHYkDyN_VhEt8yFwMX1MM4x/view?usp=drive_link). ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens <!-- - **Number of Classes:** Unknown --> <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("LKriesch/TwinTransitionMapper_AI") # Run inference preds = model("I loved the spiderman movie!") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Framework Versions - Python: 3.9.19 - SetFit: 1.0.3 - Sentence Transformers: 3.0.1 - Transformers: 4.44.0 - PyTorch: 2.4.0+cu124 - Datasets: 2.16.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
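The card describes the two-step SetFit recipe (contrastive fine-tuning of the sentence transformer, then fitting a logistic-regression head) but shows no training code. Below is a generic sketch of that recipe with the setfit 1.0 Trainer API, under stated assumptions: the two-row inline dataset, the 0/1 label scheme, and the hyperparameters are stand-ins, not the setup used for this model.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Stand-in few-shot data; the real model was trained on labelled paragraphs
# from German company websites.
train_ds = Dataset.from_dict({
    "text": [
        "Wir setzen maschinelles Lernen für vorausschauende Wartung ein.",
        "Unsere Bäckerei bietet täglich frische Brötchen an.",
    ],
    "label": [1, 0],  # 1 = AI-related, 0 = not AI-related (illustrative scheme)
})

# Wrapping the base embedding model creates a fresh LogisticRegression head.
model = SetFitModel.from_pretrained("intfloat/multilingual-e5-large")

args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()  # step 1: contrastive fine-tuning; step 2: fit the head

print(model.predict(["Wir trainieren neuronale Netze für die Spracherkennung."]))
```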
{"base_model": "intfloat/multilingual-e5-large", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [], "inference": true}
task
[ "TEXT_CLASSIFICATION" ]
46,009
sarahai/nllb-ru-uz
sarahai
translation
[ "transformers", "pytorch", "m2m_100", "text2text-generation", "translation", "nllb", "russian", "uzbek", "ru", "uz", "dataset:custom_dataset", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-03-25T05:46:24Z
2024-03-25T07:51:34+00:00
57
0
--- datasets: - custom_dataset language: - ru - uz library_name: transformers license: apache-2.0 metrics: - bleu tags: - translation - nllb - russian - uzbek widget: - text: src_lang="rus_Cyrl", tgt_lang="uzn_Latn" Не задавай глупых вопросов model-index: - name: NLLB-200-Distilled-600M Russian to Uzbek Translation results: - task: type: text2text-generation name: Text Translation dataset: name: Custom Dataset (10k parallel sentences) type: custom_dataset metrics: - type: bleu value: your_model's_bleu_score name: BLEU Score --- # Description in English # NLLB-200-Distilled-600M Russian to Uzbek Translation ## Model Description This model is a fine-tuned version of Meta's NLLB-200-Distilled-600M, tailored specifically for translating text from Russian to Uzbek. It leverages the powerful base of the NLLB framework, designed to provide high-quality translations across numerous languages, now with enhanced proficiency in the Russian to Uzbek language pair thanks to targeted fine-tuning. ### Python Code Example ```python from transformers import pipeline # Initialize the translation pipeline translator = pipeline("translation_ru_to_uz", model="sarahai/nllb-ru-uz") # Translate text from Russian to Uzbek russian_text = "Это пример предложения для перевода." translation = translator(russian_text) print("Translation:", translation[0]['translation_text']) ``` # Description in Russian # NLLB-200-Distilled-600M Перевод с русского на узбекский ## Описание Модели Эта модель является доработанной версией модели Meta NLLB-200-Distilled-600M, предназначенной специально для перевода текстов с русского на узбекский язык. Она использует мощную основу фреймворка NLLB, созданного для обеспечения качественного перевода между многими языками, теперь с улучшенной способностью в паре русский-узбекский благодаря целевой доработке. ### Пример кода на Python ```python from transformers import pipeline # Инициализация пайплайна перевода translator = pipeline("translation_ru_to_uz", model="sarahai/nllb-ru-uz") # Перевод текста с русского на узбекский russian_text = "Это пример предложения для перевода." translation = translator(russian_text) print("Перевод:", translation[0]['translation_text']) ``` # Планируется улучшение модели и совершенствование переводов путем тренировки на более объемном датасете.
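A note on language codes: NLLB-200 checkpoints identify languages by FLORES-200 codes (the widget metadata above already uses rus_Cyrl and uzn_Latn), so passing those codes explicitly is the most reliable way to drive inference; a short "ru"/"uz" task string may not resolve to the right codes. The sketch below is one way to do this, shown both with the pipeline and with `generate()` plus `forced_bos_token_id`; it is an illustration, not the card's documented example.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline

model_id = "sarahai/nllb-ru-uz"

# Pipeline route: pass the FLORES-200 codes explicitly.
translator = pipeline("translation", model=model_id,
                      src_lang="rus_Cyrl", tgt_lang="uzn_Latn")
print(translator("Это пример предложения для перевода.")[0]["translation_text"])

# Lower-level route: set the source language on the tokenizer and force the
# target-language token at the start of generation.
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="rus_Cyrl")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
inputs = tokenizer("Это пример предложения для перевода.", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("uzn_Latn"),
    max_length=64,
)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```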
null
Non_BioNLP
# Description in English # NLLB-200-Distilled-600M Russian to Uzbek Translation ## Model Description This model is a fine-tuned version of Meta's NLLB-200-Distilled-600M, tailored specifically for translating text from Russian to Uzbek. It leverages the powerful base of the NLLB framework, designed to provide high-quality translations across numerous languages, now with enhanced proficiency in the Russian to Uzbek language pair thanks to targeted fine-tuning. ### Python Code Example ```python from transformers import pipeline # Initialize the translation pipeline translator = pipeline("translation_ru_to_uz", model="sarahai/nllb-ru-uz") # Translate text from Russian to Uzbek russian_text = "Это пример предложения для перевода." translation = translator(russian_text) print("Translation:", translation[0]['translation_text']) ``` # Description in Russian # NLLB-200-Distilled-600M Перевод с русского на узбекский ## Описание Модели Эта модель является доработанной версией модели Meta NLLB-200-Distilled-600M, предназначенной специально для перевода текстов с русского на узбекский язык. Она использует мощную основу фреймворка NLLB, созданного для обеспечения качественного перевода между многими языками, теперь с улучшенной способностью в паре русский-узбекский благодаря целевой доработке. ### Пример кода на Python ```python from transformers import pipeline # Инициализация пайплайна перевода translator = pipeline("translation_ru_to_uz", model="sarahai/nllb-ru-uz") # Перевод текста с русского на узбекский russian_text = "Это пример предложения для перевода." translation = translator(russian_text) print("Перевод:", translation[0]['translation_text']) ``` # Планируется улучшение модели и совершенствование переводов путем тренировки на более объемном датасете.
{"datasets": ["custom_dataset"], "language": ["ru", "uz"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["bleu"], "tags": ["translation", "nllb", "russian", "uzbek"], "widget": [{"text": "src_lang=\"rus_Cyrl\", tgt_lang=\"uzn_Latn\" Не задавай глупых вопросов"}], "model-index": [{"name": "NLLB-200-Distilled-600M Russian to Uzbek Translation", "results": [{"task": {"type": "text2text-generation", "name": "Text Translation"}, "dataset": {"name": "Custom Dataset (10k parallel sentences)", "type": "custom_dataset"}, "metrics": [{"type": "bleu", "value": "your_model's_bleu_score", "name": "BLEU Score"}]}]}]}
task
[ "TRANSLATION" ]
46,010
aryanmagoon/ms_marco_bi_encoder
aryanmagoon
sentence-similarity
[ "sentence-transformers", "safetensors", "distilbert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:676193", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-10-04T23:29:35Z
2024-10-04T23:29:47+00:00
4
0
--- base_model: distilbert/distilbert-base-uncased library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:676193 - loss:MultipleNegativesRankingLoss widget: - source_sentence: which type of tides have the largest range sentences: - 'Your BMI is based on your height and weight. It''s one way to see if you''re at a healthy weight. Underweight: Your BMI is less than 18.5. Healthy weight: Your BMI is 18.5 to 24.9. Overweight: Your BMI is 25 to 29.9. Obese: Your BMI is 30 or higher. The chart below shows examples of body mass indexes. The figure at which your height corresponds with your weight is your body mass index.' - 'For example, a slight color change in the test pad for protein may indicate a small amount of protein present in the urine whereas a deep color change may indicate a large amount. The most frequently performed chemical tests using reagent test strips are: 1 Specific gravity.' - When the moon is full or new, the gravitational pull of the moon and sun are combined. At these times, the high tides are very high and the low tides are very low. This is known as a spring high tide. Spring tides are especially strong tides (they do not have anything to do with the season Spring). They occur when the Earth, the Sun, and the Moon are in a line. The gravitational forces of the Moon and the Sun both contribute to the tides. Spring tides occur during the full moon and the new moon. - source_sentence: what is the mexican hat dance what are the moves sentences: - 'You’ve probably heard about the mis-selling of payment protection insurance, the “reclaim PPI” adverts, and the refunds people have received. Because of the high payouts, a lot of claims management companies have sprung up, trying to earn commissions from claiming refunds on behalf of their clients. ' - These symptoms could be signs of a bacterial infection, such as strep throat. Taking antibiotics won’t help at all if your sore throat is caused by viruses, but they’re essential for fighting bacterial infections like strep. Strep is the most common bacterial throat infection. Although it can occur in adults, strep throat is more common in children between ages 5 and 15. Riddle says strep can be harder to detect in younger children, because it can cause a runny nose and other symptoms that make it seem like a cold. Another fairly common throat infection is tonsillitis, which occurs when you have sore, swollen tonsils. It’s caused by many of the same viruses and bacteria that cause sore throats. If you have frequent bouts of tonsillitis or strep throat, you may need surgery (called a tonsillectomy) to have your tonsils removed. - 'Jarabe Tapatio (Mexican Hat Dance) -- April 2010. To learn the dance often considered the national dance of Mexico. To learn words from the Spanish language and facts about the country of Mexico. ' - source_sentence: where is murchison location sentences: - Share. The cerebral cortex is the layer of the brain often referred to as gray matter. The cortex (thin layer of tissue) is gray because nerves in this area lack the insulation that makes most other parts of the brain appear to be white. The cortex covers the outer portion (1.5mm to 5mm) of the cerebrum and cerebellum. The portion of the cortex that covers the cerebrum is called the cerebral cortex. The cerebral cortex consists of folded bulges called gyri that create deep furrows or fissures called sulci. 
- Murchison is a small riverside rural village located on the Goulburn River in Victoria, Australia. Murchison is located 167 kilometres from Melbourne and is just to the west of the Goulburn Valley Highway between Shepparton and Nagambie. The surrounding countryside contains orchards, vineyards and dairy farms and also HM Prison Dhurringile. At the 2011 census, Murchison had a population of 1,047 - Medicare beneficiary means an individual who is entitled to benefits under medicare part A plan and enrolled under medicare part B plan or enrolled in both medicare part A and part B plan and who resides in the U.S. Medicare beneficiaries pay deductibles and 20 percent coinsurance for most services and equipment. Whenever admitted to a hospital for a new spell of illness or benefit period, a beneficiary is entitled to another 90 days of Part A coverage. In addition, each Medicare beneficiary has a lifetime reserve of 60 days that the beneficiary may elect to use toward one or more hospital stays. 42 C.F.R. § 409.61 [a] [2]. However, if the beneficiary has elected to apply the 60 reserve days to a previous hospital stay, the lifetime reserve is exhausted - source_sentence: is hpv a std sentences: - 'HPV is the most common sexually transmitted infection (STI). HPV is a different virus than HIV and HSV (herpes). HPV is so common that nearly all sexually active men and women get it at some point in their lives. There are many different types of HPV. ' - Hibiscus plants reach a wide variety of heights due to the diversity of the species. Grown as annuals, perennials or shrubs, the height range includes dwarf varieties as well taller plants that grow up to 15 feet tall. Red leaf hibiscus (H. acetosella) is an annual tropical shrub that grows to a height of 5 feet and displays deep red leaves. Great rose mallow (Hibiscus grandiflorus) is a perennial species that displays light pink blooms at a height of 8 feet, according to the Clemson University Extension. Additionally, hollyhocks (Alcea rosea) often reach 8 feet in height and display flowers in vivid colors - Snake bites to people tend to be warning bites, and as such contain little venom. The most common venomous snake in the eastern states, copperheads are considered pit vipers, but unlike most other vipers, the copperhead does not flee when it is caught unawares. Instead, the snake will freeze in its current position. Of all the pit vipers, copperhead venom is the least toxic. Breeding does not take place every year, but a female snake will give birth to live young. Litters can consist of up to twenty young, though fewer than ten is most common. As with the majority of reptiles, the babies are on their own once they are born. - source_sentence: how long crocodile live without food sentences: - 'Copper is a chemical element with symbol Cu (from Latin: cuprum) and atomic number 29. It is a ductile metal with very high thermal and electrical conductivity. Pure copper is soft and malleable; a freshly exposed surface has a reddish-orange color. It is used as a conductor of heat and electricity, a building material, and a constituent of various metal alloys.' - Watercress, a slightly sweet and spicy green that you won’t find at every market, is an amazingly delicious green to enjoy when you get the chance. Reminiscent of arugula and spinach combined, you’ll find it often still with the roots attached or even sold in small water pots at stores like Whole Foods. 
The cruciferous veggies like watercress, kale, broccoli, cabbage, etc., all topped the list while other leafy greens such as spinach, romaine, and beet greens also ranked high on the list. - 'Share to: about 1 week actually, but most people say 2 weeks, but that is a long time if you think about it. New answer; People who deliberatley stop eating can go for about 2 weeks, an … d it tends to be skinny people who do this. You can go a long time without food but not even 2 days or so without water....' --- # SentenceTransformer based on distilbert/distilbert-base-uncased This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) <!-- at revision 12040accade4e8a0f71eabdb258fecc2e7e948be --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("aryanmagoon/ms_marco_bi_encoder") # Run inference sentences = [ 'how long crocodile live without food', 'Share to: about 1 week actually, but most people say 2 weeks, but that is a long time if you think about it. New answer; People who deliberatley stop eating can go for about 2 weeks, an … d it tends to be skinny people who do this. You can go a long time without food but not even 2 days or so without water....', 'Copper is a chemical element with symbol Cu (from Latin: cuprum) and atomic number 29. It is a ductile metal with very high thermal and electrical conductivity. Pure copper is soft and malleable; a freshly exposed surface has a reddish-orange color. 
It is used as a conductor of heat and electricity, a building material, and a constituent of various metal alloys.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 676,193 training samples * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | label | |:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 4 tokens</li><li>mean: 9.15 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 96.96 tokens</li><li>max: 254 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.13</li><li>max: 1.0</li></ul> | * Samples: | sentence_0 | sentence_1 | label | |:--------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------| | <code>what airport is closest to rinteln germany</code> | <code>What is the closest airport to Berlin, Germany? The closest international and non-international airports to Berlin, Germany are listed below in order of increasing distance.</code> | <code>0.0</code> | | <code>what is javaone</code> | <code>JavaOne™ coffee pods are individually engineered with the precise roast level, grind setting, blending and dosage to achieve the best tasting pods. Starting with only the finest quality Arabica coffee beans, we roast our beans using hot air for a consistent, even roast throughout the entire bean. While traditional drum roasting can overcook the outside of the bean and undercook the inside, our beans are evenly roasted for a smoother, richer taste.</code> | <code>0.0</code> | | <code>what does watercress taste like</code> | <code>Watercress, a slightly sweet and spicy green that you won’t find at every market, is an amazingly delicious green to enjoy when you get the chance. 
Reminiscent of arugula and spinach combined, you’ll find it often still with the roots attached or even sold in small water pots at stores like Whole Foods. The cruciferous veggies like watercress, kale, broccoli, cabbage, etc., all topped the list while other leafy greens such as spinach, romaine, and beet greens also ranked high on the list.</code> | <code>0.0</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 3 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: True - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - 
`push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:----:|:-------------:| | 0.1893 | 500 | 1.2126 | | 0.3786 | 1000 | 0.2246 | | 0.5680 | 1500 | 0.1542 | | 0.7573 | 2000 | 0.1332 | | 0.9466 | 2500 | 0.115 | | 1.1359 | 3000 | 0.1025 | | 1.3253 | 3500 | 0.0929 | | 1.5146 | 4000 | 0.081 | | 1.7039 | 4500 | 0.074 | | 1.8932 | 5000 | 0.0669 | | 2.0825 | 5500 | 0.0605 | | 2.2719 | 6000 | 0.0563 | | 2.4612 | 6500 | 0.047 | | 2.6505 | 7000 | 0.0433 | | 2.8398 | 7500 | 0.0391 | ### Framework Versions - Python: 3.10.14 - Sentence Transformers: 3.1.1 - Transformers: 4.45.1 - PyTorch: 2.4.1+cu121 - Accelerate: 0.34.2 - Datasets: 3.0.1 - Tokenizers: 0.20.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
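Because the model was trained on MS MARCO-style query-passage pairs, its natural downstream use is retrieval: embed a passage corpus once, then rank passages against query embeddings. A brief sketch using the sentence-transformers `semantic_search` utility; the three toy passages are taken loosely from the widget examples above and stand in for a real corpus.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("aryanmagoon/ms_marco_bi_encoder")

# Toy corpus; in practice these would be MS MARCO-style passages.
passages = [
    "Spring tides occur when the Earth, the Sun, and the Moon are in a line.",
    "Copper is a ductile metal with very high thermal and electrical conductivity.",
    "Watercress is a slightly sweet and spicy green reminiscent of arugula.",
]
query = "which type of tides have the largest range"

passage_emb = model.encode(passages, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

hits = util.semantic_search(query_emb, passage_emb, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {passages[hit['corpus_id']]}")
```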
null
Non_BioNLP
# SentenceTransformer based on distilbert/distilbert-base-uncased This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) <!-- at revision 12040accade4e8a0f71eabdb258fecc2e7e948be --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("aryanmagoon/ms_marco_bi_encoder") # Run inference sentences = [ 'how long crocodile live without food', 'Share to: about 1 week actually, but most people say 2 weeks, but that is a long time if you think about it. New answer; People who deliberatley stop eating can go for about 2 weeks, an … d it tends to be skinny people who do this. You can go a long time without food but not even 2 days or so without water....', 'Copper is a chemical element with symbol Cu (from Latin: cuprum) and atomic number 29. It is a ductile metal with very high thermal and electrical conductivity. Pure copper is soft and malleable; a freshly exposed surface has a reddish-orange color. It is used as a conductor of heat and electricity, a building material, and a constituent of various metal alloys.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? 
You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 676,193 training samples * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | label | |:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 4 tokens</li><li>mean: 9.15 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 96.96 tokens</li><li>max: 254 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.13</li><li>max: 1.0</li></ul> | * Samples: | sentence_0 | sentence_1 | label | |:--------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------| | <code>what airport is closest to rinteln germany</code> | <code>What is the closest airport to Berlin, Germany? The closest international and non-international airports to Berlin, Germany are listed below in order of increasing distance.</code> | <code>0.0</code> | | <code>what is javaone</code> | <code>JavaOne™ coffee pods are individually engineered with the precise roast level, grind setting, blending and dosage to achieve the best tasting pods. Starting with only the finest quality Arabica coffee beans, we roast our beans using hot air for a consistent, even roast throughout the entire bean. While traditional drum roasting can overcook the outside of the bean and undercook the inside, our beans are evenly roasted for a smoother, richer taste.</code> | <code>0.0</code> | | <code>what does watercress taste like</code> | <code>Watercress, a slightly sweet and spicy green that you won’t find at every market, is an amazingly delicious green to enjoy when you get the chance. Reminiscent of arugula and spinach combined, you’ll find it often still with the roots attached or even sold in small water pots at stores like Whole Foods. 
The cruciferous veggies like watercress, kale, broccoli, cabbage, etc., all topped the list while other leafy greens such as spinach, romaine, and beet greens also ranked high on the list.</code> | <code>0.0</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 3 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: True - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - 
`ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:----:|:-------------:| | 0.1893 | 500 | 1.2126 | | 0.3786 | 1000 | 0.2246 | | 0.5680 | 1500 | 0.1542 | | 0.7573 | 2000 | 0.1332 | | 0.9466 | 2500 | 0.115 | | 1.1359 | 3000 | 0.1025 | | 1.3253 | 3500 | 0.0929 | | 1.5146 | 4000 | 0.081 | | 1.7039 | 4500 | 0.074 | | 1.8932 | 5000 | 0.0669 | | 2.0825 | 5500 | 0.0605 | | 2.2719 | 6000 | 0.0563 | | 2.4612 | 6500 | 0.047 | | 2.6505 | 7000 | 0.0433 | | 2.8398 | 7500 | 0.0391 | ### Framework Versions - Python: 3.10.14 - Sentence Transformers: 3.1.1 - Transformers: 4.45.1 - PyTorch: 2.4.1+cu121 - Accelerate: 0.34.2 - Datasets: 3.0.1 - Tokenizers: 0.20.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
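A minimal sketch of how the configuration above (MultipleNegativesRankingLoss with `scale=20.0` and cosine similarity, batch size 64, 3 epochs, learning rate 5e-05) maps onto the `sentence-transformers` Trainer API is shown below. The dataset contents, column names, and output directory are placeholders, not the exact script used to train this model.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Base checkpoint listed in the metadata; any encoder supported by sentence-transformers works.
model = SentenceTransformer("distilbert/distilbert-base-uncased")

# Placeholder (query, passage) pairs standing in for the real ~676k-pair training set.
train_dataset = Dataset.from_dict({
    "sentence_0": ["what does watercress taste like"],
    "sentence_1": ["Watercress is a slightly sweet and peppery leafy green, similar to arugula."],
})

# Same loss as reported above: in-batch negatives with cosine similarity and scale 20.
loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",              # placeholder path
    num_train_epochs=3,
    per_device_train_batch_size=64,
    learning_rate=5e-5,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```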
{"base_model": "distilbert/distilbert-base-uncased", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:676193", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "which type of tides have the largest range", "sentences": ["Your BMI is based on your height and weight. It's one way to see if you're at a healthy weight. Underweight: Your BMI is less than 18.5. Healthy weight: Your BMI is 18.5 to 24.9. Overweight: Your BMI is 25 to 29.9. Obese: Your BMI is 30 or higher. The chart below shows examples of body mass indexes. The figure at which your height corresponds with your weight is your body mass index.", "For example, a slight color change in the test pad for protein may indicate a small amount of protein present in the urine whereas a deep color change may indicate a large amount. The most frequently performed chemical tests using reagent test strips are: 1 Specific gravity.", "When the moon is full or new, the gravitational pull of the moon and sun are combined. At these times, the high tides are very high and the low tides are very low. This is known as a spring high tide. Spring tides are especially strong tides (they do not have anything to do with the season Spring). They occur when the Earth, the Sun, and the Moon are in a line. The gravitational forces of the Moon and the Sun both contribute to the tides. Spring tides occur during the full moon and the new moon."]}, {"source_sentence": "what is the mexican hat dance what are the moves", "sentences": ["You’ve probably heard about the mis-selling of payment protection insurance, the “reclaim PPI” adverts, and the refunds people have received. Because of the high payouts, a lot of claims management companies have sprung up, trying to earn commissions from claiming refunds on behalf of their clients. ", "These symptoms could be signs of a bacterial infection, such as strep throat. Taking antibiotics won’t help at all if your sore throat is caused by viruses, but they’re essential for fighting bacterial infections like strep. Strep is the most common bacterial throat infection. Although it can occur in adults, strep throat is more common in children between ages 5 and 15. Riddle says strep can be harder to detect in younger children, because it can cause a runny nose and other symptoms that make it seem like a cold. Another fairly common throat infection is tonsillitis, which occurs when you have sore, swollen tonsils. It’s caused by many of the same viruses and bacteria that cause sore throats. If you have frequent bouts of tonsillitis or strep throat, you may need surgery (called a tonsillectomy) to have your tonsils removed.", "Jarabe Tapatio (Mexican Hat Dance) -- April 2010. To learn the dance often considered the national dance of Mexico. To learn words from the Spanish language and facts about the country of Mexico. "]}, {"source_sentence": "where is murchison location", "sentences": ["Share. The cerebral cortex is the layer of the brain often referred to as gray matter. The cortex (thin layer of tissue) is gray because nerves in this area lack the insulation that makes most other parts of the brain appear to be white. The cortex covers the outer portion (1.5mm to 5mm) of the cerebrum and cerebellum. The portion of the cortex that covers the cerebrum is called the cerebral cortex. 
The cerebral cortex consists of folded bulges called gyri that create deep furrows or fissures called sulci.", "Murchison is a small riverside rural village located on the Goulburn River in Victoria, Australia. Murchison is located 167 kilometres from Melbourne and is just to the west of the Goulburn Valley Highway between Shepparton and Nagambie. The surrounding countryside contains orchards, vineyards and dairy farms and also HM Prison Dhurringile. At the 2011 census, Murchison had a population of 1,047", "Medicare beneficiary means an individual who is entitled to benefits under medicare part A plan and enrolled under medicare part B plan or enrolled in both medicare part A and part B plan and who resides in the U.S. Medicare beneficiaries pay deductibles and 20 percent coinsurance for most services and equipment. Whenever admitted to a hospital for a new spell of illness or benefit period, a beneficiary is entitled to another 90 days of Part A coverage. In addition, each Medicare beneficiary has a lifetime reserve of 60 days that the beneficiary may elect to use toward one or more hospital stays. 42 C.F.R. § 409.61 [a] [2]. However, if the beneficiary has elected to apply the 60 reserve days to a previous hospital stay, the lifetime reserve is exhausted"]}, {"source_sentence": "is hpv a std", "sentences": ["HPV is the most common sexually transmitted infection (STI). HPV is a different virus than HIV and HSV (herpes). HPV is so common that nearly all sexually active men and women get it at some point in their lives. There are many different types of HPV. ", "Hibiscus plants reach a wide variety of heights due to the diversity of the species. Grown as annuals, perennials or shrubs, the height range includes dwarf varieties as well taller plants that grow up to 15 feet tall. Red leaf hibiscus (H. acetosella) is an annual tropical shrub that grows to a height of 5 feet and displays deep red leaves. Great rose mallow (Hibiscus grandiflorus) is a perennial species that displays light pink blooms at a height of 8 feet, according to the Clemson University Extension. Additionally, hollyhocks (Alcea rosea) often reach 8 feet in height and display flowers in vivid colors", "Snake bites to people tend to be warning bites, and as such contain little venom. The most common venomous snake in the eastern states, copperheads are considered pit vipers, but unlike most other vipers, the copperhead does not flee when it is caught unawares. Instead, the snake will freeze in its current position. Of all the pit vipers, copperhead venom is the least toxic. Breeding does not take place every year, but a female snake will give birth to live young. Litters can consist of up to twenty young, though fewer than ten is most common. As with the majority of reptiles, the babies are on their own once they are born."]}, {"source_sentence": "how long crocodile live without food", "sentences": ["Copper is a chemical element with symbol Cu (from Latin: cuprum) and atomic number 29. It is a ductile metal with very high thermal and electrical conductivity. Pure copper is soft and malleable; a freshly exposed surface has a reddish-orange color. It is used as a conductor of heat and electricity, a building material, and a constituent of various metal alloys.", "Watercress, a slightly sweet and spicy green that you won’t find at every market, is an amazingly delicious green to enjoy when you get the chance. 
Reminiscent of arugula and spinach combined, you’ll find it often still with the roots attached or even sold in small water pots at stores like Whole Foods. The cruciferous veggies like watercress, kale, broccoli, cabbage, etc., all topped the list while other leafy greens such as spinach, romaine, and beet greens also ranked high on the list.", "Share to: about 1 week actually, but most people say 2 weeks, but that is a long time if you think about it. New answer; People who deliberatley stop eating can go for about 2 weeks, an … d it tends to be skinny people who do this. You can go a long time without food but not even 2 days or so without water...."]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,011
morenolq/BART-IT-LSG-16384
morenolq
text2text-generation
[ "transformers", "pytorch", "bart", "text2text-generation", "summarization", "custom_code", "it", "dataset:joelniklaus/Multi_Legal_Pile", "base_model:morenolq/bart-it", "base_model:finetune:morenolq/bart-it", "license:mit", "autotrain_compatible", "region:us" ]
2025-01-30T14:07:24Z
2025-02-26T06:24:30+00:00
25
0
--- base_model: - morenolq/bart-it datasets: - joelniklaus/Multi_Legal_Pile language: - it library_name: transformers license: mit pipeline_tag: text2text-generation tags: - text2text-generation - summarization widget: - text: '<mask> 1234: Il contratto si intende concluso quando...' --- # 📌 Model Card: LEGIT-BART Series ## 🏛️ Model Overview The **LEGIT-BART** models are a family of **pre-trained transformer-based models** for **Italian legal text processing**. They build upon **BART-IT** ([`morenolq/bart-it`](https://huggingface.co/morenolq/bart-it)) and are further pre-trained on **Italian legal corpora**. 💡 Key features: - **Extended context length** with **Local-Sparse-Global (LSG) Attention** (up to **16,384 tokens**) 📜 - **Trained on legal documents** such as **statutes, case law, and contracts** 📑 - **Not fine-tuned for specific tasks** (requires further adaptation) ⚠️ This specific model is pre-trained on general-purpose Italian text! Please select the best model from the table below. ## 📂 Available Models | Model | Description | Link | |--------|-------------|------| | **LEGIT-BART** | Continued pre-training of `morenolq/bart-it` on **Italian legal texts** | [🔗 Link](https://huggingface.co/morenolq/LEGIT-BART) | | **LEGIT-BART-LSG-4096** | Continued pre-training of `morenolq/bart-it`, supporting **4,096 tokens** | [🔗 Link](https://huggingface.co/morenolq/LEGIT-BART-LSG-4096) | | **LEGIT-BART-LSG-16384** | Continued pre-training of `morenolq/bart-it`, supporting **16,384 tokens** | [🔗 Link](https://huggingface.co/morenolq/LEGIT-BART-LSG-16384) | | **LEGIT-SCRATCH-BART** | Trained from scratch on **Italian legal texts** | [🔗 Link](https://huggingface.co/morenolq/LEGIT-SCRATCH-BART) | | **LEGIT-SCRATCH-BART-LSG-4096** | Trained from scratch with **LSG attention**, supporting **4,096 tokens** | [🔗 Link](https://huggingface.co/morenolq/LEGIT-SCRATCH-BART-LSG-4096) | | **LEGIT-SCRATCH-BART-LSG-16384** | Trained from scratch with **LSG attention**, supporting **16,384 tokens** | [🔗 Link](https://huggingface.co/morenolq/LEGIT-SCRATCH-BART-LSG-16384) | | **BART-IT-LSG-4096** | `morenolq/bart-it` with **LSG attention**, supporting **4,096 tokens** (⚠️ no legal adaptation) | [🔗 Link](https://huggingface.co/morenolq/BART-IT-LSG-4096) | **BART-IT-LSG-16384** | `morenolq/bart-it` with **LSG attention**, supporting **16,384 tokens** (⚠️ no legal adaptation) | [🔗 Link](https://huggingface.co/morenolq/BART-IT-LSG-16384) | --- ## 🛠️ Model Details 🔹 **Architecture** - Base Model: [`morenolq/bart-it`](https://huggingface.co/morenolq/bart-it) - Transformer Encoder-Decoder - **LSG Attention** for long documents - Specific tokenizers for models trained from scratch (underperforming continual pre-training in our experiments). 🔹 **Training Data** - Dataset: [`joelniklaus/Multi_Legal_Pile`](https://huggingface.co/datasets/joelniklaus/Multi_Legal_Pile) - Types of legal texts used: - **Legislation** (laws, codes, amendments) - **Case law** (judicial decisions) - **Contracts** (public legal agreements) --- ## 🚀 How to Use ```python from transformers import BartForConditionalGeneration, AutoTokenizer # Load tokenizer and model model_name = "morenolq/BART-IT-LSG-16384" tokenizer = AutoTokenizer.from_pretrained(model_name) model = BartForConditionalGeneration.from_pretrained(model_name) # Example input input_text = "<mask> 1234: Il contratto si intende concluso quando..." 
inputs = tokenizer(input_text, return_tensors="pt", max_length=16384, truncation=True) # Generate summary summary_ids = model.generate(inputs.input_ids, max_length=150, num_beams=4, early_stopping=True) summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True) print("📝 Summary:", summary) ``` --- ⚠️ Limitations & Ethical Considerations - **Not fine-tuned for specific tasks**: The models are pre-trained on legal texts and may require further adaptation for specific legal NLP tasks (e.g., summarization, question-answering). - **Bias and fairness**: Legal texts may contain biases present in the legal system. Care should be taken to ensure fairness and ethical use of the models. - **Legal advice**: The models are not a substitute for professional legal advice. Always consult a qualified legal professional for legal matters. --- ## 📚 Reference The paper presenting LEGIT-BART models is currently under review and will be updated here once published. ```bibtex @article{benedetto2025legitbart, title = {LegItBART: a summarization model for Italian legal documents}, author = {Benedetto, Irene and La Quatra, Moreno and Cagliero, Luca}, year = 2025, journal = {Artificial Intelligence and Law}, publisher = {Springer}, pages = {1--31}, doi = {10.1007/s10506-025-09436-y}, url = {doi.org/10.1007/s10506-025-09436-y} } ``` ---
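Because these checkpoints are not fine-tuned for any downstream task, adapting them typically means standard sequence-to-sequence fine-tuning. The sketch below outlines one possible setup for abstractive summarization with the 🤗 `Seq2SeqTrainer`; the dataset files, column names, and output directory are placeholders, and `trust_remote_code=True` is assumed to be needed for the LSG custom modeling code.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "morenolq/BART-IT-LSG-16384"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# trust_remote_code=True loads the LSG attention implementation shipped with the repo.
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, trust_remote_code=True)

# Placeholder corpus with "document" and "summary" columns; substitute your own data.
raw = load_dataset("json", data_files={"train": "train.jsonl", "validation": "dev.jsonl"})

def preprocess(batch):
    model_inputs = tokenizer(batch["document"], max_length=16384, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=256, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=raw["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="legit-bart-summarization",  # placeholder
    per_device_train_batch_size=1,          # long inputs are memory-hungry
    gradient_accumulation_steps=8,
    learning_rate=3e-5,
    num_train_epochs=3,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```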
null
Non_BioNLP
# 📌 Model Card: LEGIT-BART Series ## 🏛️ Model Overview The **LEGIT-BART** models are a family of **pre-trained transformer-based models** for **Italian legal text processing**. They build upon **BART-IT** ([`morenolq/bart-it`](https://huggingface.co/morenolq/bart-it)) and are further pre-trained on **Italian legal corpora**. 💡 Key features: - **Extended context length** with **Local-Sparse-Global (LSG) Attention** (up to **16,384 tokens**) 📜 - **Trained on legal documents** such as **statutes, case law, and contracts** 📑 - **Not fine-tuned for specific tasks** (requires further adaptation) ⚠️ This specific model is pre-trained on general-purpose Italian text! Please select the best model from the table below. ## 📂 Available Models | Model | Description | Link | |--------|-------------|------| | **LEGIT-BART** | Continued pre-training of `morenolq/bart-it` on **Italian legal texts** | [🔗 Link](https://huggingface.co/morenolq/LEGIT-BART) | | **LEGIT-BART-LSG-4096** | Continued pre-training of `morenolq/bart-it`, supporting **4,096 tokens** | [🔗 Link](https://huggingface.co/morenolq/LEGIT-BART-LSG-4096) | | **LEGIT-BART-LSG-16384** | Continued pre-training of `morenolq/bart-it`, supporting **16,384 tokens** | [🔗 Link](https://huggingface.co/morenolq/LEGIT-BART-LSG-16384) | | **LEGIT-SCRATCH-BART** | Trained from scratch on **Italian legal texts** | [🔗 Link](https://huggingface.co/morenolq/LEGIT-SCRATCH-BART) | | **LEGIT-SCRATCH-BART-LSG-4096** | Trained from scratch with **LSG attention**, supporting **4,096 tokens** | [🔗 Link](https://huggingface.co/morenolq/LEGIT-SCRATCH-BART-LSG-4096) | | **LEGIT-SCRATCH-BART-LSG-16384** | Trained from scratch with **LSG attention**, supporting **16,384 tokens** | [🔗 Link](https://huggingface.co/morenolq/LEGIT-SCRATCH-BART-LSG-16384) | | **BART-IT-LSG-4096** | `morenolq/bart-it` with **LSG attention**, supporting **4,096 tokens** (⚠️ no legal adaptation) | [🔗 Link](https://huggingface.co/morenolq/BART-IT-LSG-4096) | **BART-IT-LSG-16384** | `morenolq/bart-it` with **LSG attention**, supporting **16,384 tokens** (⚠️ no legal adaptation) | [🔗 Link](https://huggingface.co/morenolq/BART-IT-LSG-16384) | --- ## 🛠️ Model Details 🔹 **Architecture** - Base Model: [`morenolq/bart-it`](https://huggingface.co/morenolq/bart-it) - Transformer Encoder-Decoder - **LSG Attention** for long documents - Specific tokenizers for models trained from scratch (underperforming continual pre-training in our experiments). 🔹 **Training Data** - Dataset: [`joelniklaus/Multi_Legal_Pile`](https://huggingface.co/datasets/joelniklaus/Multi_Legal_Pile) - Types of legal texts used: - **Legislation** (laws, codes, amendments) - **Case law** (judicial decisions) - **Contracts** (public legal agreements) --- ## 🚀 How to Use ```python from transformers import BartForConditionalGeneration, AutoTokenizer # Load tokenizer and model model_name = "morenolq/BART-IT-LSG-16384" tokenizer = AutoTokenizer.from_pretrained(model_name) model = BartForConditionalGeneration.from_pretrained(model_name) # Example input input_text = "<mask> 1234: Il contratto si intende concluso quando..." 
inputs = tokenizer(input_text, return_tensors="pt", max_length=16384, truncation=True) # Generate summary summary_ids = model.generate(inputs.input_ids, max_length=150, num_beams=4, early_stopping=True) summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True) print("📝 Summary:", summary) ``` --- ⚠️ Limitations & Ethical Considerations - **Not fine-tuned for specific tasks**: The models are pre-trained on legal texts and may require further adaptation for specific legal NLP tasks (e.g., summarization, question-answering). - **Bias and fairness**: Legal texts may contain biases present in the legal system. Care should be taken to ensure fairness and ethical use of the models. - **Legal advice**: The models are not a substitute for professional legal advice. Always consult a qualified legal professional for legal matters. --- ## 📚 Reference The paper presenting LEGIT-BART models is currently under review and will be updated here once published. ```bibtex @article{benedetto2025legitbart, title = {LegItBART: a summarization model for Italian legal documents}, author = {Benedetto, Irene and La Quatra, Moreno and Cagliero, Luca}, year = 2025, journal = {Artificial Intelligence and Law}, publisher = {Springer}, pages = {1--31}, doi = {10.1007/s10506-025-09436-y}, url = {doi.org/10.1007/s10506-025-09436-y} } ``` ---
{"base_model": ["morenolq/bart-it"], "datasets": ["joelniklaus/Multi_Legal_Pile"], "language": ["it"], "library_name": "transformers", "license": "mit", "pipeline_tag": "text2text-generation", "tags": ["text2text-generation", "summarization"], "widget": [{"text": "<mask> 1234: Il contratto si intende concluso quando..."}]}
task
[ "SUMMARIZATION" ]
46,012
hopkins/eng-deu-common
hopkins
translation
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-07-06T16:14:33Z
2023-07-06T16:32:47+00:00
8
0
--- metrics: - bleu tags: - translation - generated_from_trainer model-index: - name: eng-deu-common results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-deu-common This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6565 - Bleu: 21.1959 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
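Since the card does not include a usage example, here is a minimal inference sketch. It assumes the fine-tuned checkpoint keeps the standard mBART-50 tokenizer and language codes and is available under this repository id; adjust both if that is not the case.

```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_id = "hopkins/eng-deu-common"  # assumed to match this repository
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

# English -> German: set the source language and force German as the target.
tokenizer.src_lang = "en_XX"
inputs = tokenizer("The committee approved the proposal yesterday.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["de_DE"],
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```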
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-deu-common This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6565 - Bleu: 21.1959 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "eng-deu-common", "results": []}]}
task
[ "TRANSLATION" ]
46,013
facebook/seamless-m4t-unity-small-s2t
facebook
null
[ "fairseq2", "SeamlessM4T", "license:cc-by-nc-4.0", "region:us" ]
2023-08-20T15:40:49Z
2023-08-24T13:35:59+00:00
0
30
--- library_name: fairseq2 license: cc-by-nc-4.0 tags: - SeamlessM4T inference: false --- # SeamlessM4T - On-Device SeamlessM4T is designed to provide high quality translation, allowing people from different linguistic communities to communicate effortlessly through speech and text. SeamlessM4T covers: - 📥 101 languages for speech input - ⌨️ 96 Languages for text input/output - 🗣️ 35 languages for speech output. Apart from [SeamlessM4T-LARGE (2.3B)](https://huggingface.co/facebook/seamless-m4t-large) and [SeamlessM4T-MEDIUM (1.2B)](https://huggingface.co/facebook/seamless-m4t-medium) models, we are also developing a small model (281M) targeting for on-device inference. This README contains an example to run an exported small model covering most tasks (ASR/S2TT/S2ST). The model could be executed on popular mobile devices with Pytorch Mobile (https://pytorch.org/mobile/home/). ## Overview | Model | Checkpoint | Num Params | Disk Size | Supported Tasks | Supported Languages| |---------|------------|----------|-------------|------------|-------------------------| | UnitY-Small|[🤗 Model card](https://huggingface.co/facebook/seamless-m4t-unity-small) - [checkpoint](https://huggingface.co/facebook/seamless-m4t-unity-small/resolve/main/unity_on_device.ptl) | 281M | 862MB | S2ST, S2TT, ASR |eng, fra, hin, por, spa| | UnitY-Small-S2T |[🤗 Model card](https://huggingface.co/facebook/seamless-m4t-unity-small-s2t) - [checkpoint](https://huggingface.co/facebook/seamless-m4t-unity-small-s2t/resolve/main/unity_on_device_s2t.ptl) | 235M | 637MB | S2TT, ASR |eng, fra,hin, por, spa| UnitY-Small-S2T is a pruned version of UnitY-Small without 2nd pass unit decoding. ## Inference To use exported model, users don't need seamless_communication or fairseq2 dependency. ```python import torchaudio import torch audio_input, _ = torchaudio.load(TEST_AUDIO_PATH) # Load waveform using torchaudio s2t_model = torch.jit.load("unity_on_device_s2t.ptl") # Load exported S2T model with torch.no_grad(): text = s2t_model(audio_input, tgt_lang=TGT_LANG) # Forward call with tgt_lang specified for ASR or S2TT print(text) # Show text output ``` Also running the exported model doesn't need python runtime. For example, you could load this model in C++ following [this tutorial](https://pytorch.org/tutorials/advanced/cpp_export.html), or building your own on-device applications similar to [this example](https://github.com/pytorch/ios-demo-app/tree/master/SpeechRecognition) # Citation If you use SeamlessM4T in your work or any models/datasets/artifacts published in SeamlessM4T, please cite: ```bibtex @article{seamlessm4t2023, title={SeamlessM4T—Massively Multilingual \& Multimodal Machine Translation}, author={{Seamless Communication}, Lo\"{i}c Barrault, Yu-An Chung, Mariano Cora Meglioli, David Dale, Ning Dong, Paul-Ambroise Duquenne, Hady Elsahar, Hongyu Gong, Kevin Heffernan, John Hoffman, Christopher Klaiber, Pengwei Li, Daniel Licht, Jean Maillard, Alice Rakotoarison, Kaushik Ram Sadagopan, Guillaume Wenzek, Ethan Ye, Bapi Akula, Peng-Jen Chen, Naji El Hachem, Brian Ellis, Gabriel Mejia Gonzalez, Justin Haaheim, Prangthip Hansanti, Russ Howes, Bernie Huang, Min-Jae Hwang, Hirofumi Inaguma, Somya Jain, Elahe Kalbassi, Amanda Kallet, Ilia Kulikov, Janice Lam, Daniel Li, Xutai Ma, Ruslan Mavlyutov, Benjamin Peloquin, Mohamed Ramadan, Abinesh Ramakrishnan, Anna Sun, Kevin Tran, Tuan Tran, Igor Tufanov, Vish Vogeti, Carleigh Wood, Yilin Yang, Bokai Yu, Pierre Andrews, Can Balioglu, Marta R. 
Costa-juss\`{a} \footnotemark[3], Onur \,{C}elebi,Maha Elbayad,Cynthia Gao, Francisco Guzm\'an, Justine Kao, Ann Lee, Alexandre Mourachko, Juan Pino, Sravya Popuri, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Paden Tomasello, Changhan Wang, Jeff Wang, Skyler Wang}, journal={ArXiv}, year={2023} } ``` # License seamless_communication is CC-BY-NC 4.0 licensed
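One practical note on the inference example above: SeamlessM4T models generally expect 16 kHz mono audio, so it is worth resampling before the forward call. The sketch below is a hedged extension of the card's snippet; the file path is a placeholder, the 16 kHz mono requirement is assumed rather than documented here, and the exported model's exact input shape should be checked against the checkpoint.

```python
import torch
import torchaudio
import torchaudio.functional as F

audio, sr = torchaudio.load("sample.wav")          # placeholder path
audio = audio.mean(dim=0, keepdim=True)            # downmix to mono if needed
if sr != 16000:
    audio = F.resample(audio, orig_freq=sr, new_freq=16000)  # assumed expected sample rate

s2t_model = torch.jit.load("unity_on_device_s2t.ptl")  # exported S2T checkpoint from this repo
with torch.no_grad():
    text = s2t_model(audio, tgt_lang="eng")
print(text)
```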
null
Non_BioNLP
# SeamlessM4T - On-Device SeamlessM4T is designed to provide high quality translation, allowing people from different linguistic communities to communicate effortlessly through speech and text. SeamlessM4T covers: - 📥 101 languages for speech input - ⌨️ 96 Languages for text input/output - 🗣️ 35 languages for speech output. Apart from [SeamlessM4T-LARGE (2.3B)](https://huggingface.co/facebook/seamless-m4t-large) and [SeamlessM4T-MEDIUM (1.2B)](https://huggingface.co/facebook/seamless-m4t-medium) models, we are also developing a small model (281M) targeting for on-device inference. This README contains an example to run an exported small model covering most tasks (ASR/S2TT/S2ST). The model could be executed on popular mobile devices with Pytorch Mobile (https://pytorch.org/mobile/home/). ## Overview | Model | Checkpoint | Num Params | Disk Size | Supported Tasks | Supported Languages| |---------|------------|----------|-------------|------------|-------------------------| | UnitY-Small|[🤗 Model card](https://huggingface.co/facebook/seamless-m4t-unity-small) - [checkpoint](https://huggingface.co/facebook/seamless-m4t-unity-small/resolve/main/unity_on_device.ptl) | 281M | 862MB | S2ST, S2TT, ASR |eng, fra, hin, por, spa| | UnitY-Small-S2T |[🤗 Model card](https://huggingface.co/facebook/seamless-m4t-unity-small-s2t) - [checkpoint](https://huggingface.co/facebook/seamless-m4t-unity-small-s2t/resolve/main/unity_on_device_s2t.ptl) | 235M | 637MB | S2TT, ASR |eng, fra,hin, por, spa| UnitY-Small-S2T is a pruned version of UnitY-Small without 2nd pass unit decoding. ## Inference To use exported model, users don't need seamless_communication or fairseq2 dependency. ```python import torchaudio import torch audio_input, _ = torchaudio.load(TEST_AUDIO_PATH) # Load waveform using torchaudio s2t_model = torch.jit.load("unity_on_device_s2t.ptl") # Load exported S2T model with torch.no_grad(): text = s2t_model(audio_input, tgt_lang=TGT_LANG) # Forward call with tgt_lang specified for ASR or S2TT print(text) # Show text output ``` Also running the exported model doesn't need python runtime. For example, you could load this model in C++ following [this tutorial](https://pytorch.org/tutorials/advanced/cpp_export.html), or building your own on-device applications similar to [this example](https://github.com/pytorch/ios-demo-app/tree/master/SpeechRecognition) # Citation If you use SeamlessM4T in your work or any models/datasets/artifacts published in SeamlessM4T, please cite: ```bibtex @article{seamlessm4t2023, title={SeamlessM4T—Massively Multilingual \& Multimodal Machine Translation}, author={{Seamless Communication}, Lo\"{i}c Barrault, Yu-An Chung, Mariano Cora Meglioli, David Dale, Ning Dong, Paul-Ambroise Duquenne, Hady Elsahar, Hongyu Gong, Kevin Heffernan, John Hoffman, Christopher Klaiber, Pengwei Li, Daniel Licht, Jean Maillard, Alice Rakotoarison, Kaushik Ram Sadagopan, Guillaume Wenzek, Ethan Ye, Bapi Akula, Peng-Jen Chen, Naji El Hachem, Brian Ellis, Gabriel Mejia Gonzalez, Justin Haaheim, Prangthip Hansanti, Russ Howes, Bernie Huang, Min-Jae Hwang, Hirofumi Inaguma, Somya Jain, Elahe Kalbassi, Amanda Kallet, Ilia Kulikov, Janice Lam, Daniel Li, Xutai Ma, Ruslan Mavlyutov, Benjamin Peloquin, Mohamed Ramadan, Abinesh Ramakrishnan, Anna Sun, Kevin Tran, Tuan Tran, Igor Tufanov, Vish Vogeti, Carleigh Wood, Yilin Yang, Bokai Yu, Pierre Andrews, Can Balioglu, Marta R. 
Costa-juss\`{a} \footnotemark[3], Onur \,{C}elebi,Maha Elbayad,Cynthia Gao, Francisco Guzm\'an, Justine Kao, Ann Lee, Alexandre Mourachko, Juan Pino, Sravya Popuri, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Paden Tomasello, Changhan Wang, Jeff Wang, Skyler Wang}, journal={ArXiv}, year={2023} } ``` # License seamless_communication is CC-BY-NC 4.0 licensed
{"library_name": "fairseq2", "license": "cc-by-nc-4.0", "tags": ["SeamlessM4T"], "inference": false}
task
[ "TRANSLATION" ]
46,014
gaudi/opus-mt-bg-fi-ctranslate2
gaudi
translation
[ "transformers", "marian", "ctranslate2", "translation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-07-17T15:19:49Z
2024-10-18T23:18:36+00:00
6
0
--- license: apache-2.0 tags: - ctranslate2 - translation --- # Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-bg-fi) - This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers. 
## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-bg-fi).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-bg-fi --output_dir ./ctranslate2/opus-mt-bg-fi-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-bg-fi-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-bg-fi-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. 
) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-bg-fi-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be identical to the [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-bg-fi) by Helsinki-NLP.
null
Non_BioNLP
# Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-bg-fi) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-bg-fi).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. 
A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-bg-fi --output_dir ./ctranslate2/opus-mt-bg-fi-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-bg-fi-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-bg-fi-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-bg-fi-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-bg-fi) by Helsinki-NLP.
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
task
[ "TRANSLATION" ]
46,015
PraveenJesu/openai-whisper-small-zoomrx-v2
PraveenJesu
automatic-speech-recognition
[ "transformers", "pytorch", "jax", "whisper", "automatic-speech-recognition", "audio", "hf-asr-leaderboard", "en", "zh", "de", "es", "ru", "ko", "fr", "ja", "pt", "tr", "pl", "ca", "nl", "ar", "sv", "it", "id", "hi", "fi", "vi", "he", "uk", "el", "ms", "cs", "ro", "da", "hu", "ta", "no", "th", "ur", "hr", "bg", "lt", "la", "mi", "ml", "cy", "sk", "te", "fa", "lv", "bn", "sr", "az", "sl", "kn", "et", "mk", "br", "eu", "is", "hy", "ne", "mn", "bs", "kk", "sq", "sw", "gl", "mr", "pa", "si", "km", "sn", "yo", "so", "af", "oc", "ka", "be", "tg", "sd", "gu", "am", "yi", "lo", "uz", "fo", "ht", "ps", "tk", "nn", "mt", "sa", "lb", "my", "bo", "tl", "mg", "as", "tt", "haw", "ln", "ha", "ba", "jw", "su", "arxiv:2212.04356", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
2023-06-19T11:14:28Z
2023-06-19T11:19:03+00:00
15
0
--- language: - en - zh - de - es - ru - ko - fr - ja - pt - tr - pl - ca - nl - ar - sv - it - id - hi - fi - vi - he - uk - el - ms - cs - ro - da - hu - ta - false - th - ur - hr - bg - lt - la - mi - ml - cy - sk - te - fa - lv - bn - sr - az - sl - kn - et - mk - br - eu - is - hy - ne - mn - bs - kk - sq - sw - gl - mr - pa - si - km - sn - yo - so - af - oc - ka - be - tg - sd - gu - am - yi - lo - uz - fo - ht - ps - tk - nn - mt - sa - lb - my - bo - tl - mg - as - tt - haw - ln - ha - ba - jw - su license: apache-2.0 pipeline_tag: automatic-speech-recognition tags: - audio - automatic-speech-recognition - hf-asr-leaderboard widget: - example_title: Librispeech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: Librispeech sample 2 src: https://cdn-media.huggingface.co/speech_samples/sample2.flac model-index: - name: whisper-small results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: LibriSpeech (clean) type: librispeech_asr config: clean split: test args: language: en metrics: - type: wer value: 3.432213777886737 name: Test WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: LibriSpeech (other) type: librispeech_asr config: other split: test args: language: en metrics: - type: wer value: 7.628304527060248 name: Test WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 config: hi split: test args: language: hi metrics: - type: wer value: 87.3 name: Test WER --- # Whisper Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need for fine-tuning. Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356) by Alec Radford et al from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper). **Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were copied and pasted from the original model card. ## Model details Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model. It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision. The models were trained on either English-only data or multilingual data. The English-only models were trained on the task of speech recognition. The multilingual models were trained on both speech recognition and speech translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio. For speech translation, the model predicts transcriptions to a *different* language to the audio. Whisper checkpoints come in five configurations of varying model sizes. The smallest four are trained on either English-only or multilingual data. The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). 
The checkpoints are summarised in the following table with links to the models on the Hub: | Size | Parameters | English-only | Multilingual | |----------|------------|------------------------------------------------------|-----------------------------------------------------| | tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) | | base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) | | small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) | | medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) | | large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) | | large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) | # Usage To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor). The `WhisperProcessor` is used to: 1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model) 2. Post-process the model outputs (converting them from tokens to text) The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order: 1. The transcription always starts with the `<|startoftranscript|>` token 2. The second token is the language token (e.g. `<|en|>` for English) 3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation 4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction Thus, a typical sequence of context tokens might look as follows: ``` <|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|> ``` Which tells the model to decode in English, under the task of speech recognition, and not to predict timestamps. These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at each position. This allows one to control the output language and task for the Whisper model. If they are un-forced, the Whisper model will automatically predict the output language and task itself. The context tokens can be set accordingly: ```python model.config.forced_decoder_ids = WhisperProcessor.get_decoder_prompt_ids(language="english", task="transcribe") ``` Which forces the model to predict in English under the task of speech recognition. ## Transcription ### English to English In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language (English) and task (transcribe). 
```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset >>> # load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-small") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small") >>> model.config.forced_decoder_ids = None >>> # load dummy dataset and read audio files >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> sample = ds[0]["audio"] >>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features >>> # generate token ids >>> predicted_ids = model.generate(input_features) >>> # decode token ids to text >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False) ['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>'] >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) [' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.'] ``` The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`. ### French to French The following example demonstrates French to French transcription by setting the decoder ids appropriately. ```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import Audio, load_dataset >>> # load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-small") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small") >>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe") >>> # load streaming dataset and read first audio sample >>> ds = load_dataset("common_voice", "fr", split="test", streaming=True) >>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) >>> input_speech = next(iter(ds))["audio"] >>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features >>> # generate token ids >>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids) >>> # decode token ids to text >>> transcription = processor.batch_decode(predicted_ids) ['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>'] >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) [' Un vrai travail intéressant va enfin être mené sur ce sujet.'] ``` ## Translation Setting the task to "translate" forces the Whisper model to perform speech translation. 
### French to English ```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import Audio, load_dataset >>> # load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-small") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small") >>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate") >>> # load streaming dataset and read first audio sample >>> ds = load_dataset("common_voice", "fr", split="test", streaming=True) >>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) >>> input_speech = next(iter(ds))["audio"] >>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features >>> # generate token ids >>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids) >>> # decode token ids to text >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) [' A very interesting work, we will finally be given on this subject.'] ``` ## Evaluation This code snippet shows how to evaluate Whisper Small on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr): ```python >>> from datasets import load_dataset >>> from transformers import WhisperForConditionalGeneration, WhisperProcessor >>> import torch >>> from evaluate import load >>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test") >>> processor = WhisperProcessor.from_pretrained("openai/whisper-small") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to("cuda") >>> def map_to_pred(batch): >>> audio = batch["audio"] >>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features >>> batch["reference"] = processor.tokenizer._normalize(batch['text']) >>> >>> with torch.no_grad(): >>> predicted_ids = model.generate(input_features.to("cuda"))[0] >>> transcription = processor.decode(predicted_ids) >>> batch["prediction"] = processor.tokenizer._normalize(transcription) >>> return batch >>> result = librispeech_test_clean.map(map_to_pred) >>> wer = load("wer") >>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"])) 3.432213777886737 ``` ## Long-Form Transcription The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`: ```python >>> import torch >>> from transformers import pipeline >>> from datasets import load_dataset >>> device = "cuda:0" if torch.cuda.is_available() else "cpu" >>> pipe = pipeline( >>> "automatic-speech-recognition", >>> model="openai/whisper-small", >>> chunk_length_s=30, >>> device=device, >>> ) >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> sample = ds[0]["audio"] >>> prediction = pipe(sample.copy(), batch_size=8)["text"] " Mr. 
Quilter is the apostle of the middle classes, and we are glad to welcome his gospel." >>> # we can also return timestamps for the predictions >>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"] [{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.', 'timestamp': (0.0, 5.44)}] ``` Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm. ## Fine-Tuning The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However, its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step guide to fine-tuning the Whisper model with as little as 5 hours of labelled data. ### Evaluated Use The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research. The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them. In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech, use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes. ## Training Data The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages. As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language. ## Performance and Limitations Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level. 
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself. Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf). In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis of these limitations is provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages. ## Broader Implications We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box, their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications. There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects. ### BibTeX entry and citation info ```bibtex @misc{radford2022whisper, doi = {10.48550/ARXIV.2212.04356}, url = {https://arxiv.org/abs/2212.04356}, author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya}, title = {Robust Speech Recognition via Large-Scale Weak Supervision}, publisher = {arXiv}, year = {2022}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
null
Non_BioNLP
# Whisper Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need for fine-tuning. Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356) by Alec Radford et al from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper). **Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were copied and pasted from the original model card. ## Model details Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model. It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision. The models were trained on either English-only data or multilingual data. The English-only models were trained on the task of speech recognition. The multilingual models were trained on both speech recognition and speech translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio. For speech translation, the model predicts transcriptions to a *different* language to the audio. Whisper checkpoints come in five configurations of varying model sizes. The smallest four are trained on either English-only or multilingual data. The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The checkpoints are summarised in the following table with links to the models on the Hub: | Size | Parameters | English-only | Multilingual | |----------|------------|------------------------------------------------------|-----------------------------------------------------| | tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) | | base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) | | small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) | | medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) | | large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) | | large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) | # Usage To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor). The `WhisperProcessor` is used to: 1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model) 2. Post-process the model outputs (converting them from tokens to text) The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order: 1. The transcription always starts with the `<|startoftranscript|>` token 2. The second token is the language token (e.g. `<|en|>` for English) 3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation 4. 
In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction. Thus, a typical sequence of context tokens might look as follows: ``` <|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|> ``` Which tells the model to decode in English, under the task of speech recognition, and not to predict timestamps. These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at each position. This allows one to control the output language and task for the Whisper model. If they are un-forced, the Whisper model will automatically predict the output language and task itself. The context tokens can be set accordingly: ```python model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="english", task="transcribe") ``` Which forces the model to predict in English under the task of speech recognition. ## Transcription ### English to English In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language (English) and task (transcribe). ```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset >>> # load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-small") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small") >>> model.config.forced_decoder_ids = None >>> # load dummy dataset and read audio files >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> sample = ds[0]["audio"] >>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features >>> # generate token ids >>> predicted_ids = model.generate(input_features) >>> # decode token ids to text >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False) ['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>'] >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) [' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.'] ``` The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`. ### French to French The following example demonstrates French to French transcription by setting the decoder ids appropriately. 
```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import Audio, load_dataset >>> # load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-small") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small") >>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe") >>> # load streaming dataset and read first audio sample >>> ds = load_dataset("common_voice", "fr", split="test", streaming=True) >>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) >>> input_speech = next(iter(ds))["audio"] >>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features >>> # generate token ids >>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids) >>> # decode token ids to text >>> transcription = processor.batch_decode(predicted_ids) ['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>'] >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) [' Un vrai travail intéressant va enfin être mené sur ce sujet.'] ``` ## Translation Setting the task to "translate" forces the Whisper model to perform speech translation. ### French to English ```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import Audio, load_dataset >>> # load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-small") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small") >>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate") >>> # load streaming dataset and read first audio sample >>> ds = load_dataset("common_voice", "fr", split="test", streaming=True) >>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) >>> input_speech = next(iter(ds))["audio"] >>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features >>> # generate token ids >>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids) >>> # decode token ids to text >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) [' A very interesting work, we will finally be given on this subject.'] ``` ## Evaluation This code snippet shows how to evaluate Whisper Small on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr): ```python >>> from datasets import load_dataset >>> from transformers import WhisperForConditionalGeneration, WhisperProcessor >>> import torch >>> from evaluate import load >>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test") >>> processor = WhisperProcessor.from_pretrained("openai/whisper-small") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to("cuda") >>> def map_to_pred(batch): >>> audio = batch["audio"] >>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features >>> batch["reference"] = processor.tokenizer._normalize(batch['text']) >>> >>> with torch.no_grad(): >>> predicted_ids = model.generate(input_features.to("cuda"))[0] >>> transcription = processor.decode(predicted_ids) >>> batch["prediction"] = processor.tokenizer._normalize(transcription) >>> return 
batch >>> result = librispeech_test_clean.map(map_to_pred) >>> wer = load("wer") >>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"])) 3.432213777886737 ``` ## Long-Form Transcription The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`: ```python >>> import torch >>> from transformers import pipeline >>> from datasets import load_dataset >>> device = "cuda:0" if torch.cuda.is_available() else "cpu" >>> pipe = pipeline( >>> "automatic-speech-recognition", >>> model="openai/whisper-small", >>> chunk_length_s=30, >>> device=device, >>> ) >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> sample = ds[0]["audio"] >>> prediction = pipe(sample.copy(), batch_size=8)["text"] " Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel." >>> # we can also return timestamps for the predictions >>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"] [{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.', 'timestamp': (0.0, 5.44)}] ``` Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm. ## Fine-Tuning The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However, its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step guide to fine-tuning the Whisper model with as little as 5 hours of labelled data. ### Evaluated Use The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research. The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them. In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. 
We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech, use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes. ## Training Data The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages. As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language. ## Performance and Limitations Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level. However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself. Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf). In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis on these limitations are provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages. ## Broader Implications We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications. There are also potential dual use concerns that come with releasing Whisper. 
While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects. ### BibTeX entry and citation info ```bibtex @misc{radford2022whisper, doi = {10.48550/ARXIV.2212.04356}, url = {https://arxiv.org/abs/2212.04356}, author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya}, title = {Robust Speech Recognition via Large-Scale Weak Supervision}, publisher = {arXiv}, year = {2022}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
{"language": ["en", "zh", "de", "es", "ru", "ko", "fr", "ja", "pt", "tr", "pl", "ca", "nl", "ar", "sv", "it", "id", "hi", "fi", "vi", "he", "uk", "el", "ms", "cs", "ro", "da", "hu", "ta", false, "th", "ur", "hr", "bg", "lt", "la", "mi", "ml", "cy", "sk", "te", "fa", "lv", "bn", "sr", "az", "sl", "kn", "et", "mk", "br", "eu", "is", "hy", "ne", "mn", "bs", "kk", "sq", "sw", "gl", "mr", "pa", "si", "km", "sn", "yo", "so", "af", "oc", "ka", "be", "tg", "sd", "gu", "am", "yi", "lo", "uz", "fo", "ht", "ps", "tk", "nn", "mt", "sa", "lb", "my", "bo", "tl", "mg", "as", "tt", "haw", "ln", "ha", "ba", "jw", "su"], "license": "apache-2.0", "pipeline_tag": "automatic-speech-recognition", "tags": ["audio", "automatic-speech-recognition", "hf-asr-leaderboard"], "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}], "model-index": [{"name": "whisper-small", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (clean)", "type": "librispeech_asr", "config": "clean", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 3.432213777886737, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (other)", "type": "librispeech_asr", "config": "other", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 7.628304527060248, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 11.0", "type": "mozilla-foundation/common_voice_11_0", "config": "hi", "split": "test", "args": {"language": "hi"}}, "metrics": [{"type": "wer", "value": 87.3, "name": "Test WER"}]}]}]}
task
[ "TRANSLATION" ]
46,017
PhishingGallery/sentence-transformers-all-MiniLM-L12-v2
PhishingGallery
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "en", "dataset:s2orc", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:code_search_net", "dataset:search_qa", "dataset:eli5", "dataset:snli", "dataset:multi_nli", "dataset:wikihow", "dataset:natural_questions", "dataset:trivia_qa", "dataset:embedding-data/sentence-compression", "dataset:embedding-data/flickr30k-captions", "dataset:embedding-data/altlex", "dataset:embedding-data/simple-wiki", "dataset:embedding-data/QQP", "dataset:embedding-data/SPECTER", "dataset:embedding-data/PAQ_pairs", "dataset:embedding-data/WikiAnswers", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-10-31T03:35:24Z
2024-11-23T05:35:05+00:00
0
0
--- datasets: - s2orc - flax-sentence-embeddings/stackexchange_xml - ms_marco - gooaq - yahoo_answers_topics - code_search_net - search_qa - eli5 - snli - multi_nli - wikihow - natural_questions - trivia_qa - embedding-data/sentence-compression - embedding-data/flickr30k-captions - embedding-data/altlex - embedding-data/simple-wiki - embedding-data/QQP - embedding-data/SPECTER - embedding-data/PAQ_pairs - embedding-data/WikiAnswers language: en library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # all-MiniLM-L12-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-MiniLM-L12-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L12-v2') model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L12-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L12-v2) ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`microsoft/MiniLM-L12-H384-uncased`](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) model and fine-tuned in on a 1B sentence pairs dataset. 
We use a contrastive learning objective: given a sentence from the pair, the model should predict which, out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face, as part of the project [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8 devices, as well as guidance from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 256 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`microsoft/MiniLM-L12-H384-uncased`](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch, then apply a cross-entropy loss in which the true pair is the correct class (a minimal sketch of this in-batch objective is given after the training-data table below). #### Hyper parameters We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core) and a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion. We sampled each dataset with a weighted probability, the configuration of which is detailed in the `data_config.json` file. 
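The table below lists the individual datasets. Purely as an illustration of this kind of weighted sampling, a training loop might pick the source dataset for each batch roughly as in the sketch below; the dataset names and weights here are made-up placeholders, not the actual values from `data_config.json`:

```python
import random

# Hypothetical weights for illustration only; the real configuration lives in data_config.json
dataset_weights = {
    "reddit_comments": 0.30,
    "s2orc_citation_pairs": 0.25,
    "paq_question_answer_pairs": 0.25,
    "stackexchange_title_body": 0.20,
}

def sample_batches(batch_iterators, weights, num_batches):
    """Yield (dataset_name, batch) pairs, choosing the source dataset according to its sampling weight."""
    names = list(batch_iterators.keys())
    probs = [weights[name] for name in names]
    for _ in range(num_batches):
        name = random.choices(names, weights=probs, k=1)[0]
        yield name, next(batch_iterators[name])
```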
| Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack 
Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
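As referenced in the fine-tuning section above, the snippet below is a minimal PyTorch sketch of the in-batch contrastive objective: cosine similarities between the two sides of every pair in the batch are used as logits for a cross-entropy loss whose target is the true pair. It is an illustration of the idea, not the actual `train_script.py`, and the scaling factor of 20 is a typical choice assumed here rather than taken from the training configuration.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(emb_a: torch.Tensor, emb_b: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    """emb_a[i] and emb_b[i] are the pooled embeddings of the two sides of training pair i."""
    emb_a = F.normalize(emb_a, p=2, dim=1)  # unit-normalise so dot products equal cosine similarities
    emb_b = F.normalize(emb_b, p=2, dim=1)
    scores = emb_a @ emb_b.T * scale        # (batch, batch) similarity matrix used as logits
    labels = torch.arange(scores.size(0), device=scores.device)  # the true pair for row i is column i
    return F.cross_entropy(scores, labels)  # every other column acts as an in-batch negative
```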
null
Non_BioNLP
# all-MiniLM-L12-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-MiniLM-L12-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L12-v2') model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L12-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L12-v2) ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`microsoft/MiniLM-L12-H384-uncased`](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) model and fine-tuned in on a 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developped this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developped this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). 
We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Googles Flax, JAX, and Cloud team member about efficient deep learning frameworks. ## Intended uses Our model is intented to be used as a sentence and short paragraph encoder. Given an input text, it ouptuts a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 256 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`microsoft/MiniLM-L12-H384-uncased`](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch. We then apply the cross entropy loss by comparing with true pairs. #### Hyper parameters We trained ou model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core). We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file. | Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | 
[paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
{"datasets": ["s2orc", "flax-sentence-embeddings/stackexchange_xml", "ms_marco", "gooaq", "yahoo_answers_topics", "code_search_net", "search_qa", "eli5", "snli", "multi_nli", "wikihow", "natural_questions", "trivia_qa", "embedding-data/sentence-compression", "embedding-data/flickr30k-captions", "embedding-data/altlex", "embedding-data/simple-wiki", "embedding-data/QQP", "embedding-data/SPECTER", "embedding-data/PAQ_pairs", "embedding-data/WikiAnswers"], "language": "en", "library_name": "sentence-transformers", "license": "apache-2.0", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"]}
task
[ "QUESTION_ANSWERING" ]
46,018
fakezeta/gemma-2-9b-it-ov-int4
fakezeta
text-generation
[ "transformers", "openvino", "gemma2", "text-generation", "conversational", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:1705.03551", "arxiv:2107.03374", "arxiv:2108.07732", "arxiv:2110.14168", "arxiv:2009.11462", "arxiv:2101.11718", "arxiv:2110.08193", "arxiv:1804.09301", "arxiv:2109.07958", "arxiv:1804.06876", "arxiv:2103.03874", "arxiv:2304.06364", "arxiv:2206.04615", "arxiv:2203.09509", "base_model:google/gemma-2-9b", "base_model:finetune:google/gemma-2-9b", "license:gemma", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-09-15T23:28:14Z
2024-09-15T23:36:24+00:00
5
0
--- base_model: google/gemma-2-9b library_name: transformers license: gemma pipeline_tag: text-generation tags: - conversational --- # OpenVINO IR model with int4 quantization Model definition for LocalAI: ``` name: gemma-2-9b-it backend: transformers parameters: model: fakezeta/gemma-2-9b-it-ov-int4 context_size: 8192 type: OVModelForCausalLM template: use_tokenizer_template: true ``` To run the model directly with LocalAI: ``` local-ai run huggingface://fakezeta/gemma-2-9b-it-ov-int4/model.yaml ``` # Gemma 2 model card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) **Resources and Technical Documentation**: * [Responsible Generative AI Toolkit][rai-toolkit] * [Gemma on Kaggle][kaggle-gemma] * [Gemma on Vertex Model Garden][vertex-mg-gemma] **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b-it) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights for both pre-trained variants and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Usage Below we share some code snippets on how to get quickly started with running the model. First, install the Transformers library with: ```sh pip install -U transformers ``` Then, copy the snippet from the section that is relevant for your usecase. #### Running with the `pipeline` API ```python import torch from transformers import pipeline pipe = pipeline( "text-generation", model="google/gemma-2-9b-it", model_kwargs={"torch_dtype": torch.bfloat16}, device="cuda", # replace with "mps" to run on a Mac device ) messages = [ {"role": "user", "content": "Who are you? Please, answer in pirate-speak."}, ] outputs = pipe(messages, max_new_tokens=256) assistant_response = outputs[0]["generated_text"][-1]["content"].strip() print(assistant_response) # Ahoy, matey! I be Gemma, a digital scallywag, a language-slingin' parrot of the digital seas. I be here to help ye with yer wordy woes, answer yer questions, and spin ye yarns of the digital world. So, what be yer pleasure, eh? 🦜 ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-9b-it", device_map="auto", torch_dtype=torch.bfloat16, ) input_text = "Write me a poem about Machine Learning." 
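# tokenize the prompt and move the input tensors to the GPU before generating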
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=32) print(tokenizer.decode(outputs[0])) ``` You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows: ```python messages = [ {"role": "user", "content": "Write me a poem about Machine Learning."}, ] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda") outputs = model.generate(**input_ids, max_new_tokens=256) print(tokenizer.decode(outputs[0])) ``` <a name="precisions"></a> #### Running the model on a GPU using different precisions The native weights of this model were exported in `bfloat16` precision. You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below. * _Upcasting to `torch.float32`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-9b-it", device_map="auto", ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=32) print(tokenizer.decode(outputs[0])) ``` #### Running the model through a CLI The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage) for getting started, then launch the CLI through the following command: ```shell local-gemma --model 9b --preset speed ``` #### Quantized Versions through `bitsandbytes` <details> <summary> Using 8-bit precision (int8) </summary> ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-9b-it", quantization_config=quantization_config, ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=32) print(tokenizer.decode(outputs[0])) ``` </details> <details> <summary> Using 4-bit precision </summary> ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-9b-it", quantization_config=quantization_config, ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=32) print(tokenizer.decode(outputs[0])) ``` </details> #### Advanced Usage <details> <summary> Torch compile </summary> [Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding-up the inference of PyTorch modules. The Gemma-2 model can be run up to 6x faster by leveraging torch compile. 
Note that two warm-up steps are required before the full inference speed is realised: ```python import os os.environ["TOKENIZERS_PARALLELISM"] = "false" from transformers import AutoTokenizer, Gemma2ForCausalLM from transformers.cache_utils import HybridCache import torch torch.set_float32_matmul_precision("high") # load the model + tokenizer tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it") model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-9b-it", torch_dtype=torch.bfloat16) model.to("cuda") # apply the torch compile transformation model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True) # pre-process inputs input_text = "The theory of special relativity states " model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda") prompt_length = model_inputs.input_ids.shape[1] # set-up k/v cache past_key_values = HybridCache( config=model.config, max_batch_size=1, max_cache_len=model.config.max_position_embeddings, device=model.device, dtype=model.dtype ) # enable passing kv cache to generate model._supports_cache_class = True model.generation_config.cache_implementation = None # two warm-up steps for idx in range(2): outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128) past_key_values.reset() # fast run outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config). </details> ### Chat Template The instruction-tuned models use a chat template that must be adhered to for conversational use. The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet. Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction: ```py from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model_id = "google/gemma-2-9b-it" dtype = torch.bfloat16 tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype=dtype,) chat = [ { "role": "user", "content": "Write a hello world program" }, ] prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) ``` At this point, the prompt contains the following text: ``` <bos><start_of_turn>user Write a hello world program<end_of_turn> <start_of_turn>model ``` As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity (either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with the `<end_of_turn>` token. You can follow this format to build the prompt manually, if you need to do it without the tokenizer's chat template. After the prompt is ready, generation can be performed like this: ```py inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150) print(tokenizer.decode(outputs[0])) ``` ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. 
* **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ### Citation ```none @article{gemma_2024, title={Gemma}, url={https://www.kaggle.com/m/3301}, DOI={10.34740/KAGGLE/M/3301}, publisher={Kaggle}, author={Gemma Team}, year={2024} } ``` ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. * Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content. * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safety in line with [our policies][safety-policies]. ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably][sustainability]. ### Software Training was done using [JAX][jax] and [ML Pathways][ml-pathways]. JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. 
ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is specially suitable for [foundation models][foundation-models], including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models][gemini-2-paper]; "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. ### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | Gemma PT 9B | Gemma PT 27B | | ------------------------------ | ------------- | ----------- | ------------ | | [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 | | [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 | | [PIQA][piqa] | 0-shot | 81.7 | 83.2 | | [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 | | [BoolQ][boolq] | 0-shot | 84.2 | 84.8 | | [WinoGrande][winogrande] | partial score | 80.6 | 83.7 | | [ARC-e][arc] | 0-shot | 88.0 | 88.6 | | [ARC-c][arc] | 25-shot | 68.4 | 71.4 | | [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 | | [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 | | [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 | | [MBPP][mbpp] | 3-shot | 52.4 | 62.6 | | [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 | | [MATH][math] | 4-shot | 36.6 | 42.3 | | [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 | | [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 | | ------------------------------ | ------------- | ----------- | ------------ | ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech. * Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq]. * Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure. * Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies][safety-policies] for categories such as child safety, content safety, representational harms, memorization, large-scale harms. On top of robust internal evaluations, the results of well-known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here. 
#### Gemma 2.0 | Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B | | ------------------------ | ------------- | --------------- | ---------------- | | [RealToxicity][realtox] | average | 8.25 | 8.84 | | [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 | | [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 | | [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 | | [Winogender][winogender] | top-1 | 79.17 | 77.22 | | [TruthfulQA][truthfulqa] | | 50.27 | 51.60 | | [Winobias 1_2][winobias] | | 78.09 | 81.94 | | [Winobias 2_2][winobias] | | 95.32 | 97.22 | | [Toxigen][toxigen] | | 39.30 | 38.42 | | ------------------------ | ------------- | --------------- | ---------------- | ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. 
These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit][rai-toolkit]. * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy][prohibited-use]. * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives. 
[rai-toolkit]: https://ai.google.dev/responsible [kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2 [terms]: https://ai.google.dev/gemma/terms [vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335 [sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference [safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11 [prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy [tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu [sustainability]: https://sustainability.google/operating-sustainably/ [jax]: https://github.com/google/jax [ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ [sustainability]: https://sustainability.google/operating-sustainably/ [foundation-models]: https://ai.google/discover/foundation-models/ [gemini-2-paper]: https://goo.gle/gemma2report [mmlu]: https://arxiv.org/abs/2009.03300 [hellaswag]: https://arxiv.org/abs/1905.07830 [piqa]: https://arxiv.org/abs/1911.11641 [socialiqa]: https://arxiv.org/abs/1904.09728 [boolq]: https://arxiv.org/abs/1905.10044 [winogrande]: https://arxiv.org/abs/1907.10641 [commonsenseqa]: https://arxiv.org/abs/1811.00937 [openbookqa]: https://arxiv.org/abs/1809.02789 [arc]: https://arxiv.org/abs/1911.01547 [triviaqa]: https://arxiv.org/abs/1705.03551 [naturalq]: https://github.com/google-research-datasets/natural-questions [humaneval]: https://arxiv.org/abs/2107.03374 [mbpp]: https://arxiv.org/abs/2108.07732 [gsm8k]: https://arxiv.org/abs/2110.14168 [realtox]: https://arxiv.org/abs/2009.11462 [bold]: https://arxiv.org/abs/2101.11718 [crows]: https://aclanthology.org/2020.emnlp-main.154/ [bbq]: https://arxiv.org/abs/2110.08193v2 [winogender]: https://arxiv.org/abs/1804.09301 [truthfulqa]: https://arxiv.org/abs/2109.07958 [winobias]: https://arxiv.org/abs/1804.06876 [math]: https://arxiv.org/abs/2103.03874 [agieval]: https://arxiv.org/abs/2304.06364 [big-bench]: https://arxiv.org/abs/2206.04615 [toxigen]: https://arxiv.org/abs/2203.09509
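Since this repository hosts an OpenVINO int4 export of Gemma 2 9B IT, a minimal Python loading sketch may be useful alongside the LocalAI setup described in this card. The snippet below is illustrative only: it assumes `optimum[openvino]` (Optimum Intel) is installed, that the exported weights load with default `from_pretrained` settings, and that the bundled tokenizer carries the Gemma chat template; the prompt and generation length are arbitrary.

```python
# pip install optimum[openvino]
from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM

model_id = "fakezeta/gemma-2-9b-it-ov-int4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)  # OpenVINO runtime, CPU by default

messages = [{"role": "user", "content": "Write me a poem about Machine Learning."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt", return_dict=True
)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If this sketch does not match your installed Optimum Intel version, the LocalAI configuration in this card remains the reference way to serve this export.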
null
Non_BioNLP
# OpenVINO IR model with int4 quantization Model definition for LocalAI: ``` name: gemma-2-9b-it backend: transformers parameters: model: fakezeta/gemma-2-9b-it-ov-int4 context_size: 8192 type: OVModelForCausalLM template: use_tokenizer_template: true ``` To run the model directly with LocalAI: ``` local-ai run huggingface://fakezeta/gemma-2-9b-it-ov-int4/model.yaml ``` # Gemma 2 model card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) **Resources and Technical Documentation**: * [Responsible Generative AI Toolkit][rai-toolkit] * [Gemma on Kaggle][kaggle-gemma] * [Gemma on Vertex Model Garden][vertex-mg-gemma] **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b-it) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights for both pre-trained variants and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Usage Below we share some code snippets on how to get quickly started with running the model. First, install the Transformers library with: ```sh pip install -U transformers ``` Then, copy the snippet from the section that is relevant for your usecase. #### Running with the `pipeline` API ```python import torch from transformers import pipeline pipe = pipeline( "text-generation", model="google/gemma-2-9b-it", model_kwargs={"torch_dtype": torch.bfloat16}, device="cuda", # replace with "mps" to run on a Mac device ) messages = [ {"role": "user", "content": "Who are you? Please, answer in pirate-speak."}, ] outputs = pipe(messages, max_new_tokens=256) assistant_response = outputs[0]["generated_text"][-1]["content"].strip() print(assistant_response) # Ahoy, matey! I be Gemma, a digital scallywag, a language-slingin' parrot of the digital seas. I be here to help ye with yer wordy woes, answer yer questions, and spin ye yarns of the digital world. So, what be yer pleasure, eh? 🦜 ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-9b-it", device_map="auto", torch_dtype=torch.bfloat16, ) input_text = "Write me a poem about Machine Learning." 
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=32) print(tokenizer.decode(outputs[0])) ``` You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows: ```python messages = [ {"role": "user", "content": "Write me a poem about Machine Learning."}, ] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda") outputs = model.generate(**input_ids, max_new_tokens=256) print(tokenizer.decode(outputs[0])) ``` <a name="precisions"></a> #### Running the model on a GPU using different precisions The native weights of this model were exported in `bfloat16` precision. You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below. * _Upcasting to `torch.float32`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-9b-it", device_map="auto", ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=32) print(tokenizer.decode(outputs[0])) ``` #### Running the model through a CLI The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage) for getting started, then launch the CLI through the following command: ```shell local-gemma --model 9b --preset speed ``` #### Quantized Versions through `bitsandbytes` <details> <summary> Using 8-bit precision (int8) </summary> ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-9b-it", quantization_config=quantization_config, ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=32) print(tokenizer.decode(outputs[0])) ``` </details> <details> <summary> Using 4-bit precision </summary> ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-9b-it", quantization_config=quantization_config, ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=32) print(tokenizer.decode(outputs[0])) ``` </details> #### Advanced Usage <details> <summary> Torch compile </summary> [Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding-up the inference of PyTorch modules. The Gemma-2 model can be run up to 6x faster by leveraging torch compile. 
Note that two warm-up steps are required before the full inference speed is realised: ```python import os os.environ["TOKENIZERS_PARALLELISM"] = "false" from transformers import AutoTokenizer, Gemma2ForCausalLM from transformers.cache_utils import HybridCache import torch torch.set_float32_matmul_precision("high") # load the model + tokenizer tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it") model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-9b-it", torch_dtype=torch.bfloat16) model.to("cuda") # apply the torch compile transformation model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True) # pre-process inputs input_text = "The theory of special relativity states " model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda") prompt_length = model_inputs.input_ids.shape[1] # set-up k/v cache past_key_values = HybridCache( config=model.config, max_batch_size=1, max_cache_len=model.config.max_position_embeddings, device=model.device, dtype=model.dtype ) # enable passing kv cache to generate model._supports_cache_class = True model.generation_config.cache_implementation = None # two warm-up steps for idx in range(2): outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128) past_key_values.reset() # fast run outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config). </details> ### Chat Template The instruction-tuned models use a chat template that must be adhered to for conversational use. The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet. Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction: ```py from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model_id = "google/gemma-2-9b-it" dtype = torch.bfloat16 tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype=dtype,) chat = [ { "role": "user", "content": "Write a hello world program" }, ] prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) ``` At this point, the prompt contains the following text: ``` <bos><start_of_turn>user Write a hello world program<end_of_turn> <start_of_turn>model ``` As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity (either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with the `<end_of_turn>` token. You can follow this format to build the prompt manually, if you need to do it without the tokenizer's chat template. After the prompt is ready, generation can be performed like this: ```py inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150) print(tokenizer.decode(outputs[0])) ``` ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. 
* **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ### Citation ```none @article{gemma_2024, title={Gemma}, url={https://www.kaggle.com/m/3301}, DOI={10.34740/KAGGLE/M/3301}, publisher={Kaggle}, author={Gemma Team}, year={2024} } ``` ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. * Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content. * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safety in line with [our policies][safety-policies]. ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably][sustainability]. ### Software Training was done using [JAX][jax] and [ML Pathways][ml-pathways]. JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. 
ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is specially suitable for [foundation models][foundation-models], including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models][gemini-2-paper]; "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. ### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | Gemma PT 9B | Gemma PT 27B | | ------------------------------ | ------------- | ----------- | ------------ | | [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 | | [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 | | [PIQA][piqa] | 0-shot | 81.7 | 83.2 | | [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 | | [BoolQ][boolq] | 0-shot | 84.2 | 84.8 | | [WinoGrande][winogrande] | partial score | 80.6 | 83.7 | | [ARC-e][arc] | 0-shot | 88.0 | 88.6 | | [ARC-c][arc] | 25-shot | 68.4 | 71.4 | | [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 | | [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 | | [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 | | [MBPP][mbpp] | 3-shot | 52.4 | 62.6 | | [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 | | [MATH][math] | 4-shot | 36.6 | 42.3 | | [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 | | [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 | | ------------------------------ | ------------- | ----------- | ------------ | ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech. * Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq]. * Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure. * Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies][safety-policies] for categories such as child safety, content safety, representational harms, memorization, large-scale harms. On top of robust internal evaluations, the results of well-known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here. 
#### Gemma 2.0 | Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B | | ------------------------ | ------------- | --------------- | ---------------- | | [RealToxicity][realtox] | average | 8.25 | 8.84 | | [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 | | [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 | | [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 | | [Winogender][winogender] | top-1 | 79.17 | 77.22 | | [TruthfulQA][truthfulqa] | | 50.27 | 51.60 | | [Winobias 1_2][winobias] | | 78.09 | 81.94 | | [Winobias 2_2][winobias] | | 95.32 | 97.22 | | [Toxigen][toxigen] | | 39.30 | 38.42 | | ------------------------ | ------------- | --------------- | ---------------- | ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. 
These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit][rai-toolkit]. * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy][prohibited-use]. * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives. 
[rai-toolkit]: https://ai.google.dev/responsible [kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2 [terms]: https://ai.google.dev/gemma/terms [vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335 [sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference [safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11 [prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy [tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu [sustainability]: https://sustainability.google/operating-sustainably/ [jax]: https://github.com/google/jax [ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ [sustainability]: https://sustainability.google/operating-sustainably/ [foundation-models]: https://ai.google/discover/foundation-models/ [gemini-2-paper]: https://goo.gle/gemma2report [mmlu]: https://arxiv.org/abs/2009.03300 [hellaswag]: https://arxiv.org/abs/1905.07830 [piqa]: https://arxiv.org/abs/1911.11641 [socialiqa]: https://arxiv.org/abs/1904.09728 [boolq]: https://arxiv.org/abs/1905.10044 [winogrande]: https://arxiv.org/abs/1907.10641 [commonsenseqa]: https://arxiv.org/abs/1811.00937 [openbookqa]: https://arxiv.org/abs/1809.02789 [arc]: https://arxiv.org/abs/1911.01547 [triviaqa]: https://arxiv.org/abs/1705.03551 [naturalq]: https://github.com/google-research-datasets/natural-questions [humaneval]: https://arxiv.org/abs/2107.03374 [mbpp]: https://arxiv.org/abs/2108.07732 [gsm8k]: https://arxiv.org/abs/2110.14168 [realtox]: https://arxiv.org/abs/2009.11462 [bold]: https://arxiv.org/abs/2101.11718 [crows]: https://aclanthology.org/2020.emnlp-main.154/ [bbq]: https://arxiv.org/abs/2110.08193v2 [winogender]: https://arxiv.org/abs/1804.09301 [truthfulqa]: https://arxiv.org/abs/2109.07958 [winobias]: https://arxiv.org/abs/1804.06876 [math]: https://arxiv.org/abs/2103.03874 [agieval]: https://arxiv.org/abs/2304.06364 [big-bench]: https://arxiv.org/abs/2206.04615 [toxigen]: https://arxiv.org/abs/2203.09509
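Since this repository hosts an OpenVINO int4 export of Gemma 2 9B IT, a minimal Python loading sketch may be useful alongside the LocalAI setup described in this card. The snippet below is illustrative only: it assumes `optimum[openvino]` (Optimum Intel) is installed, that the exported weights load with default `from_pretrained` settings, and that the bundled tokenizer carries the Gemma chat template; the prompt and generation length are arbitrary.

```python
# pip install optimum[openvino]
from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM

model_id = "fakezeta/gemma-2-9b-it-ov-int4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)  # OpenVINO runtime, CPU by default

messages = [{"role": "user", "content": "Write me a poem about Machine Learning."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt", return_dict=True
)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If this sketch does not match your installed Optimum Intel version, the LocalAI configuration in this card remains the reference way to serve this export.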
{"base_model": "google/gemma-2-9b", "library_name": "transformers", "license": "gemma", "pipeline_tag": "text-generation", "tags": ["conversational"]}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
46,019
Shobhank-iiitdwd/Distilled-xlm-RoBERTa-base-squad2-QA
Shobhank-iiitdwd
question-answering
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "exbert", "multilingual", "dataset:squad_v2", "license:mit", "endpoints_compatible", "region:us" ]
2022-12-27T12:15:13Z
2022-12-27T12:21:19+00:00
15
1
--- datasets: - squad_v2 language: multilingual license: mit tags: - exbert thumbnail: https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg --- # deepset/xlm-roberta-base-squad2-distilled - haystack's distillation feature was used for training. deepset/xlm-roberta-large-squad2 was used as the teacher model. ## Overview **Language model:** deepset/xlm-roberta-base-squad2-distilled **Language:** Multilingual **Downstream-task:** Extractive QA **Training data:** SQuAD 2.0 **Eval data:** SQuAD 2.0 **Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system) **Infrastructure**: 1x Tesla v100 ## Hyperparameters ``` batch_size = 56 n_epochs = 4 max_seq_len = 384 learning_rate = 3e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 temperature = 3 distillation_loss_weight = 0.75 ``` ## Usage ### In Haystack Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/): ```python reader = FARMReader(model_name_or_path="deepset/xlm-roberta-base-squad2-distilled") # or reader = TransformersReader(model_name_or_path="deepset/xlm-roberta-base-squad2-distilled",tokenizer="deepset/xlm-roberta-base-squad2-distilled") ``` For a complete example of ``deepset/xlm-roberta-base-squad2-distilled`` being used for [question answering], check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system) ### In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "deepset/xlm-roberta-base-squad2-distilled" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.' } res = nlp(QA_input) # b) Load model & tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## Performance Evaluated on the SQuAD 2.0 dev set ``` "exact": 74.06721131980123% "f1": 76.39919553344667% ``` ## Authors **Timo Möller:** [email protected] **Julian Risch:** [email protected] **Malte Pietsch:** [email protected] **Michel Bartels:** [email protected] ## About us <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3"> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/> </div> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/> </div> </div> [deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc. 
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)

## Get in touch and join the Haystack community

<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>. We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join">Discord community open to everyone!</a></strong></p>

[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)

By the way: [we're hiring!](http://www.deepset.ai/jobs)
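One usage note to round off the card: because this reader is trained on SQuAD 2.0, it can also flag questions that are unanswerable from the given context. The sketch below is an illustrative example rather than part of the original card; the question and context strings are made up, and it relies on the `handle_impossible_answer` option of the Transformers question-answering pipeline.

```python
from transformers import pipeline

model_name = "deepset/xlm-roberta-base-squad2-distilled"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)

result = qa(
    question="When was the company founded?",
    context="The option to convert models between FARM and transformers gives freedom to the user.",
    handle_impossible_answer=True,  # lets the pipeline return an empty answer
)
# An empty `answer` string means the model considers the question unanswerable here.
print(result)
```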
null
Non_BioNLP
# deepset/xlm-roberta-base-squad2-distilled - haystack's distillation feature was used for training. deepset/xlm-roberta-large-squad2 was used as the teacher model. ## Overview **Language model:** deepset/xlm-roberta-base-squad2-distilled **Language:** Multilingual **Downstream-task:** Extractive QA **Training data:** SQuAD 2.0 **Eval data:** SQuAD 2.0 **Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system) **Infrastructure**: 1x Tesla v100 ## Hyperparameters ``` batch_size = 56 n_epochs = 4 max_seq_len = 384 learning_rate = 3e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 temperature = 3 distillation_loss_weight = 0.75 ``` ## Usage ### In Haystack Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/): ```python reader = FARMReader(model_name_or_path="deepset/xlm-roberta-base-squad2-distilled") # or reader = TransformersReader(model_name_or_path="deepset/xlm-roberta-base-squad2-distilled",tokenizer="deepset/xlm-roberta-base-squad2-distilled") ``` For a complete example of ``deepset/xlm-roberta-base-squad2-distilled`` being used for [question answering], check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system) ### In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "deepset/xlm-roberta-base-squad2-distilled" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.' } res = nlp(QA_input) # b) Load model & tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## Performance Evaluated on the SQuAD 2.0 dev set ``` "exact": 74.06721131980123% "f1": 76.39919553344667% ``` ## Authors **Timo Möller:** [email protected] **Julian Risch:** [email protected] **Malte Pietsch:** [email protected] **Michel Bartels:** [email protected] ## About us <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3"> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/> </div> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/> </div> </div> [deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc. 
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)

## Get in touch and join the Haystack community

<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>. We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join">Discord community open to everyone!</a></strong></p>

[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)

By the way: [we're hiring!](http://www.deepset.ai/jobs)
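One usage note to round off the card: because this reader is trained on SQuAD 2.0, it can also flag questions that are unanswerable from the given context. The sketch below is an illustrative example rather than part of the original card; the question and context strings are made up, and it relies on the `handle_impossible_answer` option of the Transformers question-answering pipeline.

```python
from transformers import pipeline

model_name = "deepset/xlm-roberta-base-squad2-distilled"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)

result = qa(
    question="When was the company founded?",
    context="The option to convert models between FARM and transformers gives freedom to the user.",
    handle_impossible_answer=True,  # lets the pipeline return an empty answer
)
# An empty `answer` string means the model considers the question unanswerable here.
print(result)
```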
{"datasets": ["squad_v2"], "language": "multilingual", "license": "mit", "tags": ["exbert"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg"}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
46,020
polieste/fastAbs_large
polieste
summarization
[ "transformers", "pytorch", "tf", "t5", "text2text-generation", "summarization", "vi", "dataset:Yuhthe/vietnews", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-01-08T05:15:42Z
2024-01-10T04:12:39+00:00
17
0
---
datasets:
- Yuhthe/vietnews
language: vi
license: mit
tags:
- summarization
widget:
- text: Input text.
---

# fastAbs-large

Finetuned on `vietnews` Abstractive Summarization

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("polieste/fastAbs_large")
model = AutoModelForSeq2SeqLM.from_pretrained("polieste/fastAbs_large")
model.cuda()

sentence = "Input text"

# prepend the task prefix used during fine-tuning
text = "vietnews: " + sentence + " </s>"
encoding = tokenizer(text, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda")

outputs = model.generate(
    input_ids=input_ids, attention_mask=attention_masks,
    max_length=512,
    early_stopping=True
)

for output in outputs:
    line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(line)
```
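The example above summarizes a single document. As a small, illustrative extension (not from the model authors), the sketch below batches several inputs while keeping the same `vietnews: ... </s>` prefix; the placeholder articles, `max_length`, and beam settings are assumptions you should tune for your data.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("polieste/fastAbs_large")
model = AutoModelForSeq2SeqLM.from_pretrained("polieste/fastAbs_large").cuda()

articles = ["First article text ...", "Second article text ..."]  # placeholders
texts = ["vietnews: " + article + " </s>" for article in articles]

# pad to the longest input so the whole batch fits in one tensor
encoding = tokenizer(texts, return_tensors="pt", padding=True, truncation=True).to("cuda")
outputs = model.generate(**encoding, max_length=512, num_beams=4, early_stopping=True)

for summary in tokenizer.batch_decode(outputs, skip_special_tokens=True, clean_up_tokenization_spaces=True):
    print(summary)
```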
null
Non_BioNLP
# fastAbs-large

Finetuned on `vietnews` Abstractive Summarization

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("polieste/fastAbs_large")
model = AutoModelForSeq2SeqLM.from_pretrained("polieste/fastAbs_large")
model.cuda()

sentence = "Input text"

# prepend the task prefix used during fine-tuning
text = "vietnews: " + sentence + " </s>"
encoding = tokenizer(text, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda")

outputs = model.generate(
    input_ids=input_ids, attention_mask=attention_masks,
    max_length=512,
    early_stopping=True
)

for output in outputs:
    line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(line)
```
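The example above summarizes a single document. As a small, illustrative extension (not from the model authors), the sketch below batches several inputs while keeping the same `vietnews: ... </s>` prefix; the placeholder articles, `max_length`, and beam settings are assumptions you should tune for your data.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("polieste/fastAbs_large")
model = AutoModelForSeq2SeqLM.from_pretrained("polieste/fastAbs_large").cuda()

articles = ["First article text ...", "Second article text ..."]  # placeholders
texts = ["vietnews: " + article + " </s>" for article in articles]

# pad to the longest input so the whole batch fits in one tensor
encoding = tokenizer(texts, return_tensors="pt", padding=True, truncation=True).to("cuda")
outputs = model.generate(**encoding, max_length=512, num_beams=4, early_stopping=True)

for summary in tokenizer.batch_decode(outputs, skip_special_tokens=True, clean_up_tokenization_spaces=True):
    print(summary)
```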
{"datasets": ["Yuhthe/vietnews"], "language": "vi", "license": "mit", "tags": ["summarization"], "widget": [{"text": "Input text."}]}
task
[ "SUMMARIZATION" ]
46,021
Triangle104/EuroLLM-1.7B-Q5_K_M-GGUF
Triangle104
null
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "en", "de", "es", "fr", "it", "pt", "pl", "nl", "tr", "sv", "cs", "el", "hu", "ro", "fi", "uk", "sl", "sk", "da", "lt", "lv", "et", "bg", "no", "ca", "hr", "ga", "mt", "gl", "zh", "ru", "ko", "ja", "ar", "hi", "base_model:utter-project/EuroLLM-1.7B", "base_model:quantized:utter-project/EuroLLM-1.7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-12-09T10:08:18Z
2024-12-09T10:09:10+00:00
3
0
--- base_model: utter-project/EuroLLM-1.7B language: - en - de - es - fr - it - pt - pl - nl - tr - sv - cs - el - hu - ro - fi - uk - sl - sk - da - lt - lv - et - bg - 'no' - ca - hr - ga - mt - gl - zh - ru - ko - ja - ar - hi library_name: transformers license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- # Triangle104/EuroLLM-1.7B-Q5_K_M-GGUF This model was converted to GGUF format from [`utter-project/EuroLLM-1.7B`](https://huggingface.co/utter-project/EuroLLM-1.7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/utter-project/EuroLLM-1.7B) for more details on the model. --- Model details: - This is the model card for the first pre-trained model of the EuroLLM series: EuroLLM-1.7B. You can also check the instruction tuned version: EuroLLM-1.7B-Instruct. Developed by: Unbabel, Instituto Superior Técnico, Instituto de Telecomunicações, University of Edinburgh, Aveni, University of Paris-Saclay, University of Amsterdam, Naver Labs, Sorbonne Université. Funded by: European Union. Model type: A 1.7B parameter multilingual transfomer LLM. Language(s) (NLP): Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish, Arabic, Catalan, Chinese, Galician, Hindi, Japanese, Korean, Norwegian, Russian, Turkish, and Ukrainian. License: Apache License 2.0. Model Details The EuroLLM project has the goal of creating a suite of LLMs capable of understanding and generating text in all European Union languages as well as some additional relevant languages. EuroLLM-1.7B is a 1.7B parameter model trained on 4 trillion tokens divided across the considered languages and several data sources: Web data, parallel data (en-xx and xx-en), and high-quality datasets. EuroLLM-1.7B-Instruct was further instruction tuned on EuroBlocks, an instruction tuning dataset with focus on general instruction-following and machine translation. Model Description EuroLLM uses a standard, dense Transformer architecture: We use grouped query attention (GQA) with 8 key-value heads, since it has been shown to increase speed at inference time while maintaining downstream performance. We perform pre-layer normalization, since it improves the training stability, and use the RMSNorm, which is faster. We use the SwiGLU activation function, since it has been shown to lead to good results on downstream tasks. We use rotary positional embeddings (RoPE) in every layer, since these have been shown to lead to good performances while allowing the extension of the context length. For pre-training, we use 256 Nvidia H100 GPUs of the Marenostrum 5 supercomputer, training the model with a constant batch size of 3,072 sequences, which corresponds to approximately 12 million tokens, using the Adam optimizer, and BF16 precision. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. 
### CLI: ```bash llama-cli --hf-repo Triangle104/EuroLLM-1.7B-Q5_K_M-GGUF --hf-file eurollm-1.7b-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/EuroLLM-1.7B-Q5_K_M-GGUF --hf-file eurollm-1.7b-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/EuroLLM-1.7B-Q5_K_M-GGUF --hf-file eurollm-1.7b-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/EuroLLM-1.7B-Q5_K_M-GGUF --hf-file eurollm-1.7b-q5_k_m.gguf -c 2048 ```
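If you prefer to drive the quantized file from Python instead of the CLI, the `llama-cpp-python` bindings can load the same GGUF. The sketch below is illustrative only: it assumes `llama-cpp-python` and `huggingface_hub` are installed, and the context size, prompt, and token limit are arbitrary choices rather than recommendations.

```python
# pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# fetch the Q5_K_M file from this repository
gguf_path = hf_hub_download(
    repo_id="Triangle104/EuroLLM-1.7B-Q5_K_M-GGUF",
    filename="eurollm-1.7b-q5_k_m.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
result = llm("The meaning to life and the universe is", max_tokens=64)
print(result["choices"][0]["text"])
```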
null
Non_BioNLP
# Triangle104/EuroLLM-1.7B-Q5_K_M-GGUF This model was converted to GGUF format from [`utter-project/EuroLLM-1.7B`](https://huggingface.co/utter-project/EuroLLM-1.7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/utter-project/EuroLLM-1.7B) for more details on the model. --- Model details: - This is the model card for the first pre-trained model of the EuroLLM series: EuroLLM-1.7B. You can also check the instruction tuned version: EuroLLM-1.7B-Instruct. Developed by: Unbabel, Instituto Superior Técnico, Instituto de Telecomunicações, University of Edinburgh, Aveni, University of Paris-Saclay, University of Amsterdam, Naver Labs, Sorbonne Université. Funded by: European Union. Model type: A 1.7B parameter multilingual transfomer LLM. Language(s) (NLP): Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish, Arabic, Catalan, Chinese, Galician, Hindi, Japanese, Korean, Norwegian, Russian, Turkish, and Ukrainian. License: Apache License 2.0. Model Details The EuroLLM project has the goal of creating a suite of LLMs capable of understanding and generating text in all European Union languages as well as some additional relevant languages. EuroLLM-1.7B is a 1.7B parameter model trained on 4 trillion tokens divided across the considered languages and several data sources: Web data, parallel data (en-xx and xx-en), and high-quality datasets. EuroLLM-1.7B-Instruct was further instruction tuned on EuroBlocks, an instruction tuning dataset with focus on general instruction-following and machine translation. Model Description EuroLLM uses a standard, dense Transformer architecture: We use grouped query attention (GQA) with 8 key-value heads, since it has been shown to increase speed at inference time while maintaining downstream performance. We perform pre-layer normalization, since it improves the training stability, and use the RMSNorm, which is faster. We use the SwiGLU activation function, since it has been shown to lead to good results on downstream tasks. We use rotary positional embeddings (RoPE) in every layer, since these have been shown to lead to good performances while allowing the extension of the context length. For pre-training, we use 256 Nvidia H100 GPUs of the Marenostrum 5 supercomputer, training the model with a constant batch size of 3,072 sequences, which corresponds to approximately 12 million tokens, using the Adam optimizer, and BF16 precision. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/EuroLLM-1.7B-Q5_K_M-GGUF --hf-file eurollm-1.7b-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/EuroLLM-1.7B-Q5_K_M-GGUF --hf-file eurollm-1.7b-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. 
``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/EuroLLM-1.7B-Q5_K_M-GGUF --hf-file eurollm-1.7b-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/EuroLLM-1.7B-Q5_K_M-GGUF --hf-file eurollm-1.7b-q5_k_m.gguf -c 2048 ```
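If you prefer to drive the same GGUF file from Python instead of the CLI or server, the quantized checkpoint can also be loaded with the `llama-cpp-python` bindings. This is a minimal sketch rather than part of the original conversion instructions: it assumes `llama-cpp-python` (built for your hardware) and `huggingface_hub` are installed, and the generation settings are illustrative only.

```python
# Minimal sketch: loading the quantized GGUF with llama-cpp-python (assumed installed).
from llama_cpp import Llama

# Downloads eurollm-1.7b-q5_k_m.gguf from the Hub on first use, then loads it locally.
llm = Llama.from_pretrained(
    repo_id="Triangle104/EuroLLM-1.7B-Q5_K_M-GGUF",
    filename="eurollm-1.7b-q5_k_m.gguf",
    n_ctx=2048,      # context window, mirroring the -c 2048 server example above
    verbose=False,
)

# Plain text completion, mirroring the -p prompt used in the CLI example.
out = llm("The meaning to life and the universe is", max_tokens=64, temperature=0.7)
print(out["choices"][0]["text"])
```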
{"base_model": "utter-project/EuroLLM-1.7B", "language": ["en", "de", "es", "fr", "it", "pt", "pl", "nl", "tr", "sv", "cs", "el", "hu", "ro", "fi", "uk", "sl", "sk", "da", "lt", "lv", "et", "bg", "no", "ca", "hr", "ga", "mt", "gl", "zh", "ru", "ko", "ja", "ar", "hi"], "library_name": "transformers", "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"]}
task
[ "TRANSLATION" ]
46,022
EllieChoi/klue-roberta-base-klue-sts-mrc
EllieChoi
sentence-similarity
[ "sentence-transformers", "safetensors", "roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:17552", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:shangrilar/klue-roberta-base-klue-sts", "base_model:finetune:shangrilar/klue-roberta-base-klue-sts", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2025-01-09T05:41:25Z
2025-01-09T05:42:15+00:00
6
0
--- base_model: shangrilar/klue-roberta-base-klue-sts library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:17552 - loss:MultipleNegativesRankingLoss widget: - source_sentence: 김치의 숙성정도를 알아내는 센서가 있는 김치냉장고 이름은? sentences: - "알코올 의존증은 생리학적 원인과 심리학적 원인에 의해 심화된다. 체내의 알코올 대사 과정에서 발생하는 테트라하이드로 이소퀴놀란스는 모르핀과\ \ 비슷한 구조를 갖는 물질로 동물 실험에서 알코올 기호를 높이는 것이 확인됐다. 이 물질로 인해 알코올 의존증 환자는 술을 끊을 경우 오한,\ \ 구토, 무력감 등의 금단 증상을 겪는다. 또한 알코올 의존증은 심리적 원인에도 많은 영향을 받는데, 가족의 사랑을 받지 못해 의존적인 성향을\ \ 보이는 사람들이 보다 알코올 의존증에 취약하다. \n\n신체적 원인은, 1950년대 초반에 캘리포니아 공과대학 연구소의 연구 결과를 통한\ \ 인체의 뇌에 쾌락 중추 연구 결과 따르면, 신경해부학적로 볼때 알코올이 오피 오이드(아편이라는 물질 과 화학성분이 유사)라는 물질이 분비를\ \ 촉진하여, 알코올에 의해서 자극을 통해서, 지속적인 쾌감을 느끼려는 욕구에 의해서 술에 대한 중독이 심화 된다는 것이다. \n유전적으로\ \ 부모가 알코올 의존상태일때는 그 자녀는 4배 이상 알코올 의존증환자가되며, 심리적으로는 현실에 대한 불안이나 억압 또는 부정적인것을 잊어버리기\ \ 위한 보상을 받으려는 욕구로 알코올을 섭취한다. \n\n알코올 의존의 원인이 되는 알코올 남용과 알코올 의존의 경우에는 다음과 같이 나누어진다.\ \ \n*‘알코올남용’의 경우 알코올에 대한 의존이 심한것으로 반복적 음주로 직장 및 가정에서 역할을 다하지 못하거나, 신체적으로 건강등에\ \ 위험한 상황에서 반복적으로 음주한다.\n* ‘알코올의존’은 예전만큼 음주해서는 알코올의 효과가 감소되어 음주량이 더 많아 지는것 및 술을\ \ 줄이거나 끊었을 때 나타나는 불안, 불면, 설사, 환청, 환시, 간질발작과 같은 증상등이 나타나는데 이런 증상을 없애기 위해 술을 또 마시는것등이다." - 광주광역시 광산구의 삼성전자 냉장고 생산라인. 18일 찾은 이곳에선 요즘 김장철을 맞아 19초마다 1대씩, 매일 2000여대의 김치냉장고를 생산하고 있었다. 생산라인을 총괄하는 김광덕 상무는 “11월엔 하루 10시간씩 2교대로 일해도 소화하지 못할 만큼 물량이 몰리고 있다”고 말했다.삼성이 올해 김치냉장고 시장에 내놓은 승부수는 ‘메탈’이다. 김치를 장독에 담아 땅속에 묻는 것과 같은 수준으로 온도 변화를 최소화하기 위해 김치냉장고 칸칸을 알루미늄 스테인리스 등 금속으로 감쌌다. 제품명을 ‘메탈 그라운드’로 정한 것도 그래서다. 이를 통해 냉장고 문을 여닫을 때도 내부 온도 변화를 ±0.3도 이내로 유지할 수 있게 했다. 온도 편차가 0.2도만 커져도 김치가 15%나 더 숙성하기 때문이라고 삼성 측은 설명했다.김치냉장고 개발팀이 금속 재질을 적용하기로 결정한 것은 지난 1월이었다. 지난해 내놓은 제품은 김치통 밖에 캡슐을 하나 더 씌우는 방법으로 정온성을 유지했다. 하지만 소비자 입장에선 김치를 꺼내기 위해 캡슐에 이어 김치통을 다시 열어야 해 불편했다. 그래서 캡슐을 없애는 대신 냉기 전달력이 뛰어난 금속을 쓰기로 한 것이다.이때부터 김치와의 전쟁이 시작됐다. 처음 시도하는 것이다 보니 어떤 두께로 어떤 소재의 금속을 써야 김치를 익히는 데 가장 좋은지 알 수가 없었다. 또 다양해진 소비자 입맛에 맞추기 위해서는 배추김치 외에도 묵은지, 깍두기, 저염 김치 등 다양한 김치에 맞는 숙성기술을 확보해야 했다. 결국 먹어 보는 수밖에 없었다. 개발팀은 금속판 두께, 온도 등 다양한 변수를 설정하고 직접 김치를 익힌 뒤 시식했다. 이날 방문한 연구실에도 수십대의 냉장고에 갖가지 김치들이 보관돼 있었다. 이명주 책임연구원은 “김치를 먹고 빵 같은 것을 씹어 입을 씻어낸 뒤 다시 김치를 먹기를 몇 달이나 반복했다”며 “실험용으로 쓴 김치가 5t이 넘는다”고 말했다. 결국 여섯 종류의 김치를 익히는 최적의 기술을 개발해 제품에 담았다.개발팀은 여기에 그치지 않고 보관통마다 담겨 있는 김치가 어느 정도 익었는지 볼 수 있는 센서도 달기로 했다. 설익은 김치를 좋아하는 사람도 있고 조금 쉰 듯한 맛을 즐기는 사람도 있기 때문이다. 팀은 연구를 통해 김치가 익을수록 초산이라는 성분을 많이 뿜어낸다는 것을 알아냈다. ‘메탈 그라운드’에 포함된 김치통에는 이 센서가 각각 달려 있다.김치냉장고는 국내에서만 팔리는 제품이지만 시장 규모는 작지 않다. 매년 1조원어치의 김치냉장고가 팔리고 있다는 게 업계 분석이다. 특히 올해는 풍년으로 배추 가격이 크게 떨어진 데다 교체 수요가 몰리면서 예년보다 시장 규모가 10% 이상 커질 것으로 전망하고 있다. 삼성을 비롯한 각 사가 치열한 김치냉장고 신제품 경쟁을 벌이는 이유다. - "그러나 한편으로는 미국 사회의 인종차별주의적, 흑인을 차별하는 태도를 목격하면서 백인들의 오만함에 대한 부정적인 생각을 갖게 되었다. 미국\ \ 체류 중 황인종을 멸시하는 백인 불량배들에게 끌려가 가끔 얻어맞기도 하고, “유색 인종”이라는 이유로 호텔 투숙을 거절당해 정거장에서 밤을\ \ 지샜는가 하면, 세례 교인이었던 그와 가장 가까워야 할 미국인 선교사에게마저 늘 은근히 - 그리고 가끔은 매우 노골적으로 - “왕따”당하는\ \ 처지였다 귀국 이후에 조선에서 만난 미국인 선교사들도 비슷하게 그를 대우했다.\n\n당시의 그에 대해 후일 노르웨이 오슬로 대학 교수 박노자는\ \ '백인 인종주의에 상처받아 만신창이가 되었을 그의 마음 상태 '를 지적하기도 했다.\n\n만약 내가 마음대로 내 고국을 선택할 수 있다면,\ \ 나는 일본을 선택할 것이다. 오, 축복받은 일본이여! 동방의 낙원이여! |1893년 11월 1일자 일기 \n\n그는 평소 조선인들의 불결한\ \ 위생과 겉치레, 감정적 대응 등을 내심 경멸해왔다. 그리고 서구의 기독교사상과 일한 만큼만 대가를 받는다는 청교도 정신, 합리주의의 수용을\ \ 통해 이를 개선하려 했다. 그러나 평소 기독교선교사들로부터도 자기 일 처리도 못하는 작은 아이, 원주민, 예의를 지키지 않아도 될 만한\ \ 사람의 대접을 늘 받아 온 윤치호는, 인종주의야말로 미국의 가장 중요한 특징이라는 결론을 내리게 되었다 그 뒤 그는 조선에도 민주주의와\ \ 자유주의, 기독교 정신의 도입이 시급하다고 봤다. 
그러나 미국의 민주주의와 자유주의, 기독교 사상과 개척정신은 높이 평가하면서도 내심 백인종을\ \ 혐오하는 이중적인 태도와 인종주의적인 사고를 갖게 되었다.\n\n어느 교회에서 남부 출신 남감리교회 목사들이 예배 시간에 흑인을 박멸해야\ \ 된다는 설교, 흑인들을 아프리카로 추방해야 된다는 설교를 듣고는 충격을 받기도 했다 그들이 목사인가 기독교인인가 자체를 의심하기도 했다.\ \ 하지만 흑인이 백인들로부터 차별대우를 받는 것에는 분노하면서도 흑인에 대해서는 1893년 2월 17일자 일기에 '(아프리카인들이 미국에\ \ 끌려와) 영어를 배운 것만으로도 그들의 노예생활에 대해 충분히 보상받은 것이다. 라는 이중적인 시각을 갖게 된다. 귀국 이후 그는 국내\ \ 인사들이 미국을 무조건적으로 의존하거나 일제와는 다른 선량한 국가일 것이라는 생각을 비판, 경계하기도 했다." - source_sentence: 다음 해 우리나라 국채의 규모는 얼마인가? sentences: - 국회 예산결산특별위원회는 오는 30일 ‘2015년도 예산안 및 기금운용계획안에 대한 공청회’를 열고 각계 전문가로부터 내년도 예산안에 대한 의견을 듣는다. 이어 11월6일 전체회의를 시작으로 본격적인 예산 심의에 들어간다. 7~8일에는 정홍원 국무총리와 최경환 부총리 겸 기획재정부 장관이 출석한 가운데 예산안에 대한 정부 측 제안 설명과 예결위 전문위원의 검토 보고를 듣고 종합 정책질의를 할 예정이다.예결특위는 10~13일 부별 심사를 거쳐 16일부터 예산안심사 소위원회를 가동하고 30일 전체회의에서 예산안을 의결해 본회의로 넘긴다는 계획이다. 올해부터는 개정 국회법(일명 국회선진화법)에 따라 11월 안에 내년도 예산안 심의를 마치지 못하면 12월1일 본회의에 정부 원안이 자동 부의(토의에 부침)된다. 여야 모두 헌법상 예산안 처리 시한인 12월2일을 준수하겠다는 방침이지만 현행법상 여야 원내대표 합의로 이를 연기할 수 있도록 돼 있어 올해도 예산안 처리 시한을 넘길 가능성을 배제할 수 없다.정부가 국회에 제출한 내년도 예산안 규모는 총 376조원으로 올해에 비해 5.7%(20조2000억원) 늘었다. 국민연금 등을 제외한 관리재정수지 적자가 올해 25조5000억원에서 내년에는 33조6000억원으로 확대될 전망이다. 내년 국가채무도 국내총생산(GDP)의 35.7%인 570조1000억원으로 급증해 사상 최고치를 기록하게 된다.정부·여당은 이에 대해 활력을 잃어가고 있는 한국 경제를 되살리기 위한 불가피한 조치라고 보고 있으나 야당 측은 무리한 재정 운용으로 미래 세대에 부담을 떠넘겨서는 안 된다고 지적한다. 국회 기획재정위원회 소속 한 새정치민주연합 의원은 “이렇게 무리한 적자재정을 편성한 것은 아무리 좋게 봐주려고 해도 내후년 총선을 겨냥해 경기를 부양하려는 의도라는 의심을 지울 수 없다”며 “정말 한국 경제의 구조적 문제를 해결하려고 한다면 일시적 처방이 아닌 잠재성장률 자체를 높이기 위한 정책을 내놨어야 한다”고 말했다.예산안은 물론 법안 심사 과정에서도 여야 간 한판 승부가 불가피하다. 대표적 쟁점 법안인 세월호 특별법, 정부조직법, ‘유병언법’ 등 ‘세월호 3법’과 여권이 사활을 걸고 추진 중인 경제활성화 관련 법, 정부가 연내 입법에 드라이브를 걸고 있는 공무원연금 개혁안 등을 놓고 여야가 첨예하게 대립하고 있기 때문이다. 특히 정부·여당이 최우선 입법 과제로 꼽고 있는 서비스산업발전기본법, 의료법 개정안, 관광진흥법, 자본시장법, 크루즈법, 마리나 항만법, 소규모 주택임대 수입에 대해 소득세를 낮추는 소득세법 개정안, 월세 임차인에 대한 세제지원 확대 등을 담은 조세특례제한법 개정안, 분양가 상한제를 탄력 적용하는 주택법 개정안, 재건축 초과이익 환수폐지 법안 등 30여개에 달하는 경제활성화 법이 국회에 묶여 있는 상태다.김현숙 새누리당 원내대변인은 “안타깝게도 새정치연합은 민생과 복지를 얘기하면서 그 핵심인 경제활성화는 외면하고 있다”며 “질타만 하지 말고 관련 법을 꼼꼼히 살펴보고 반드시 함께 처리해 대한민국이 저성장의 굴레에서 벗어나는 기회로 삼아야 한다”고 주장했다. - '당간지주는 사찰 입구에 설치해 두는 것으로, 절에 행사나 의식이 있을 때 이곳에 당이라는 깃발을 걸어둔다. 이 깃발을 매달아두는 길쭉한 장대를 당간이라 하며, 이를 양쪽에서 지탱해 주는 두 돌기둥을 당간지주라 한다. 드물게 당간이 있으나 대부분은 두 기둥만 남아 있다. 이 당간지주는 만덕동에서 제1 만덕터널로 들어가는 진입로 아래의 만덕사터에 서 있다. 두 기둥 중에서 한쪽 기둥만 남아 있으며, 기둥 옆으로는 당간의 받침돌로 보이는 작은 돌이 놓여 있다. 바깥면은 가운데에 도드라진 선을 한 줄 새겨놓았고, 안쪽면 꼭대기에는 네모난 홈을 파서 당간을 고정시키도록 하였다. 맨 위는 안쪽에서 바깥쪽으로 2단의 굴곡을 주면서 둥글게 깎아두었다. 각 부분의 돌을 다듬는 솜씨가 인근의 범어사 당간지주와 비교하여 매우 세련되어 있다. 비록 한쪽 기둥만 남아 있으나, 땅위에 드러난 높이가 길어 그 사이에 당간이 놓였다면 더욱 더 웅장했을 것으로 보인다. 만덕사가 창건된 시기와 때를 같이하여 고려 전기에 만든 것으로 보이며, 만덕사 옛터의 경계를 정하는데 있어 좋은 자료가 되고 있다.' - '전년도에 2년 연속 일본 시리즈 정상에 오른 세이부 라이온스의 모리 마사아키 감독이 퍼시픽 리그 올스타팀을 지휘했고 센트럴 리그 우승을 이끈 요미우리 자이언츠의 오 사다하루 감독이 센트럴 리그 올스타팀의 지휘를 맡았다. 1987년부터 MVP 수상자에게 상금과 상품을 증정해 온 산요 전기가 이 해부터 특별 협찬 기업이 되면서 대회명도 ‘산요 올스타 게임’(サンヨーオールスターゲーム)이라는 명칭이 변경됐다(2006년까지). 지붕을 씌우고 좌우 양쪽의 100m 넓이를 가지는 도쿄 돔이 개장하면서 ‘야구의 질이 달라졌다’라고 말할 정도였고, 시즌 종료와 함께 긴 역사를 가졌던 한큐 브레이브스와 난카이 호크스가 매각되면서 프로 야구도 새로운 시대를 맞이하려 하고 있었다. 7월 23일로 치를 예정이었던 1차전이 우천으로 인해서 24일로 연기됐다. 1차전에서는 결과적으로 올스타전 최종 개최가 되는 한큐 니시노미야 구장에서 개최돼 니시노미야 구장의 홈팀인 한큐의 호시노 노부유키가 퍼시픽 올스타팀의 선발 투수로서 등판하여 책임 투구 이닝을 무실점으로 틀어 막았고 한큐의 주력 타자인 부머 웰스가 대형 홈런을 날리는 등 역사 깊은 구단의 화려한 해체와 같은 전개가 이뤄졌다. 2차전에서는 나고야 구장에서 개최됐는데 오 사다하루를 동경하여 프로에 입문한 고졸 신인 다쓰나미 가즈요시가 선발 출전했고, 당시에는 지명타자가 올스타전에 도입돼 있지 않아서 가도타 히로미쓰가 3번 타자 겸 우익수로서 선발 출전했다. 6회에 오카다 아키노부(한신)의 역전 홈런으로 센트럴 올스타팀은 1승 1패를 기록했고 오카다는 2차전 MVP에 선정됐다. 
그리고 최종전인 3차전에서는 개장한 지 4개월 된 도쿄 돔에서 처음으로 올스타전이 개최됐는데 당시 도쿄 돔을 홈구장으로 사용하고 있던 닛폰햄 파이터스의 에이스 니시자키 유키히로가 퍼시픽 올스타팀의 선발 투수로서 3이닝을 퍼펙트(2회말에는 클린업에게서 3자 연속 삼진, 3회말 2사에 대타 다쓰나미로부터 5개째의 삼진) 기록을 세웠다. 1986년 이후 2년 만에 연장전으로 들어가면서 무승부 직전의 12회말, 야수를 전부 기용한 센트럴 올스타팀은 투수 미즈노 가쓰히토를 대타로 보냈는데 이것이 끝내기 희생 플라이가 되면서 센트럴 올스타팀이 쇼와 시대의 마지막 올스타전을 승리로 매듭지었다. 3차전이 열린 도쿄 돔에서는 1988년도 야구 명예의 전당 시상식이 거행됐는데 센트럴 올스타팀의 감독이자 요미우리를 지휘하고 있는 오 사다하루의 후임으로 유력하게 거론돼던 나가시마 시게오가 그해 야구 명예의 전당의 헌액자로서 등장했다(그 후 히로오카 다쓰로를 임명하려 했으나 고사했고 후지타 모토시가 재부임). 사다하루·나가시마의 ‘ON 콤비’가 올스타전 무대에서 동시에 등장한 것은 1979년(나가시마는 감독, 사다하루는 현역 선수) 이후의 일이었고 이들 두 사람이 양대 리그의 감독으로서 올스타전에 나란히 등장한 것은 그로부터 13년 후인 2001년이다. 그 해에는 워렌 크로마티의 대역으로 요미우리에서 대활약한 뤼밍츠가 감독 추천으로 올스타전에 첫 출전하여 전 경기에 선발로 출전했지만 무안타에 그쳤다.' - source_sentence: 정치하는 엄마들이 피해자들이 정상적인 일상생활을 할 수 있도록 돕기 위해 진행했던 서비스는? sentences: - "스쿨미투는 학교에서 일어난 아동 성폭력을 아동청소년들 스스로 고발하며 공론화의 주체가 된 인권운동이다. 2018년 4월 용화여고 졸업생들이\ \ 재학 중에 겪었던 학교 성폭력을 공론화하자, 재학생들이 교실 창문에 포스트잇으로 #METOO #WITHYOU 모양으로 붙여 화답하며 한국의\ \ 고질적인 학교 성폭력 실태 고발이 들불처럼 이어졌다. 오랫동안 교사들이 저질러온 성폭행, 성추행, 성희롱, 성차별 발언들이 만연한 교육현장의\ \ 추악한 민낯은 한국 사회에 큰 충격을 주었다. 2018년 10월 국정감사에서 ‘스쿨미투’가 언급될 만큼 피해 학생들의 제보와 폭로가 이어져\ \ 사태의 심각성이 입증되었지만, 처벌은 미온적이고 가해교사들은 교단으로 부메랑처럼 복귀하는 등 교육 현장은 개선되지 않았다. 더 이상의 피해를\ \ 막기 위해 용기 냈던 학생들은 주동자 색출 위협과 소외와 조롱을 견디며 보호받지 못했다. 학교 및 교육 당국의 더디고 미흡한 조치로 공론화\ \ 당사자들은 일상으로 돌아오지 못하고 힘겹게 학업을 이어가거나 학교를 떠나기도 했다. 한국사회는 마땅히 안전해야 할 교육 현장에서 아동학대•\ \ 아동인권 침해 사안이 일어나도 문제 해결의 결정적 주체인 학교가 은폐• 축소하면 전혀 알 수 없는 끔찍한 현실을 마주하게 되었다.\n\n\ \n이에 정치하는엄마들은 피해 당사자들이 하루빨리 일상으로 복귀하길 바라는 마음으로 무료 법률지원을 시작했다. 2018년 2월부터 스쿨미투를\ \ 공론화한 전국 79개교 중 연락 가능한 49개교 트위터 계정들에 안내를 시작으로 피해 당사자들이 학교, 교육청, 경찰 등과 겪고 있는 갈등\ \ 조정을 이어가고 있다. 더불어 대한민국의 교육현장에서 오랫동안 자행되어 왔던 아동 인권 침해의 벽을 뚫고 나온, 아동 청소년들의 절박하고\ \ 용기있는 목소리들을 바탕으로 스쿨미투 전국지도를 제작하여 온라인 상에 공개하고 있다.\n\n\n피해자 회복과 가해 재발 방지를 위해 투명한\ \ 행정처리와 정보공개는 필수이다. 이에 2019년 3월 정치하는엄마들은 제주를 제외한 전국 16개 시도교육청에 스쿨미투 현황 파악을 위한\ \ 정보공개청구를 하였다. 그러나 대부분의 시도교육청으로부터 비공개 답변을 받았다. 정보공개 청구 당시 23개교로 학생들이 가장 많이 고발한\ \ 지역 관할인 서울시교육청의 불투명한 정보 비공개, 부존재로 답변하는 등 소극행정에 대하여 정치하는엄마들은 서울시교육청 상대로 2019년\ \ 5월 스승의 날을 맞아 정보공개거부 철회 행정소송 제기했다. 2020년 3월 5일 서울행정법원은 교사 실명 등 개인정보를 가리고 스쿨미투\ \ 처리결과를 공개하라는 일부승소 판결을 내렸다. 그러나 서울시교육청은 정보가 공개될 경우 배제징계를 받은 교사가 교단에 설 수 있을지 걱정된다며,\ \ 교사의 사생활 보호를 이유로 2020년 3월 20일 항소하여 올바른 교육문화 정책의 방향과 아동인권을 후퇴시킨다는 비판을 받고 있다. \n\ \n\n스쿨미투는2019년 3월 한국을 찾은 트위터 CEO 잭 도시가 불의에 맞서는 한국 청소년들이 공론의 장으로 트위터를 선택해준 걸 자랑스럽게\ \ 여기며 높이 평가할 정도로 2018년 한국 인권운동의 한 획을 그은 역사적인 일이다. 동시에 2019년 유엔 아동인권위원회 본회의 안건으로\ \ 채택될 만큼 국제사회가 한국에게 그 후속대책과 국가적 책임을 묻고 있는 중대한 사안이다. 학교 성폭력 해결은 현재를 살아가고 있는 아동뿐만\ \ 아니라 오랫동안 위계에 갇혀 부당한 권력에 침묵했던 시민들 스스로가 인권침해 상황을 견뎌왔던 존재였음을 인정하는 것부터 그 첫걸음이 될\ \ 것이다.\n\n정치하는엄마들은 아이들과 함께 살아가는 사람들로서, 교육 현장에 만연해 있는 성폭력 해결을 통해 그 누구도 차별 받지 않고\ \ 존재 그대로 존중받는 세상을 만들기 위해 노력하고 있다." - '그녀는 경기도 용인의 한 이교도 집안에서 태어났다. 그녀는 경기도 봉천리에 있는 한 이교도 이씨 집안으로 시집왔고, 중년에 시누이 이매임 테레사의 권유로 천주교에 입교했다. 그녀는 그녀의 남편이 개종했으면 하는 바람은 이루지 못했지만, 자식들이 개종하는데에는 일조하였다. 후일에 그녀의 딸들 중 이영희 막달레나는 1839년 7월 20일에 순교했고, 같은 해 9월 3일에는 이정희 바르바라도 순교하였다. 1839년 3월에 그녀는 성사를 받으러 한양으로 올라왔다. 그녀는 시누이 이매임과 두 딸 이영희와 이정희가 사는 집에 머물던 중 교우 남명혁 다미아노와 이광헌 아우구스티노의 용감한 순교 소식을 계기로, 두 딸과 시누이, 동료 김성임 마르타 그리고 김 루치아 등과 함께 자수를 결심하고 1839년 4월 11일에 남명혁 다미아노의 집을 경계하던 포졸들에게 묵주를 보이며 천주교 신자임을 알리고 자수하였다. 허계임의 옥중 생활에 대해서는 많은 기록이 남아 있지 않은데, 그녀는 격심한 고문을 받았지만, 용기와 인내로 온갖 고통을 견뎌냈다는 것이 그것에 대해 알려진 대부분이다. 그녀의 용기는 그녀의 두 딸 이영희와 이정희의 용기만큼 위대했다. 그 두 순교자의 어머니 또한 순교자가 된 것이다. 그 영광스러운 순교자 가족은 하느님의 은총이다. 허계임은 1839년 9월 26일에 서소문 바깥에서 여덟 명의 교우와 함께 참수되었다. 그렇게 그녀는 67세의 나이로 순교하였다.' - 대림성모병원(병원장 김성원)이 ‘코로나19로 인한 5대 암(위암‧대장암‧자궁경부암‧간암‧유방암) 진단 지연 실태’에 대한 분석 결과를 발표했다. 암은 발견 시기에 따라 생존율이 크게 달라지기 때문에 조기 발견이 매우 중요하다. 
대림성모병원은 코로나19로 인해 낮아진 5대 암 진단 실태를 알리고 빠른 검진의 중요성을 강조하고자 이번 조사를 실시했다. ■ 올해 3~5월 5대 암 진단 건수, 전년 동기 대비 5000여건 감소 대림성모병원이 국민건강보험공단에 등록된 위암, 대장암, 자궁경부암, 간암, 유방암 산정특례 건수를 분석한 결과, 2020년 3~5월 건수가 전년 동기간 대비 5,146건 감소한 것으로 나타났으며, 이는 2019년에 비해 21.4% 감소한 수치이다. 해당 건수에는 산정특례 종료 시점에서 재발 또는 전이로 기간을 연장하는 경우는 제외했기 때문에 해당 암을 처음 진단받은 환자 수가 전년 대비 크게 낮아진 것을 알 수 있다. 위암 등록건수는 2019년 6,823건에서 2020년 4,735건으로 2,088건 감소해 30.6%가 낮아졌다. 대장암은 6,765건에서 5,325건으로 1,440건(21.3% 감소), 자궁경부암은 846건에서 632건으로 214건(25.3% 감소), 간암은 3,800건에서 3,276건으로 524건(13.8% 감소), 유방암은 5,789건에서 4,909건으로 880건(15.2% 감소)이 줄어든 것으로 나타났다. ■ 코로나19로 5대 암 수검율 큰 폭 하락 이처럼 코로나19가 발병한 이후 3개월간 2020년 3~5월 5대 암 환자 수는 전년 대비해 5,000여 명이 감소한 것으로 나타났다. 이는 수많은 암 환자가 진단되지 않고 있다는 것을 의미한다. 암 발견 시기는 생존율에 큰 영향을 미치기 때문에 조기검진을 통한 빠른 치료는 무엇보다 중요하다. 대림성모병원은 진단 건수뿐만 아니라 실제 검진율이 얼마나 감소했는지 알아보기 위해 본원의 5대 암 검진 건수를 분석했다. 그 결과 올해 2~5월에 5대 암 검진 건수가 작년 동기간 대비해 52.8%가 감소한 것으로 나타났다. 대림성모병원 김성원 병원장은 “코로나19로 인한 검진율 감소는 본원뿐만 아니라 전국의 모든 병원이 체감하고 있을 것이다.”라며 “코로나19 2차 대유행 및 장기화를 대비하기 위해 사회적 거리두기를 통한 생활 방역이 매우 중요한 시기이다. 하지만 암 검진을 늦추는 것은 진단 지연으로 암이 전이되어 심각한 결과를 초래할 수 있기 때문에 검진 대상자는 제때 검진을 받고 암 증상이 있다면 조금이라도 빨리 병원을 방문하기를 바란다.”라고 말했다. - source_sentence: 전자태그 갱신제와 함께 도입되는 제도는? sentences: - 왓챠는 ‘제1회 왓챠 시리즈 각본 공모전’ 결과 대상을 수상한 전설 작가의 <공단>을 포함해 총 5편의 수상작을 선정했다고 29일 발표했다. <공단>은 개성공단에서 일하던 북한 여인과 남한 여인이 서로와의 교감을 통해 성장하고 연대하며 희망과 사랑을 찾게 되는 이야기다. 대상에 선정된 전설작가에게는 상금 2천만원이 수여된다. 우수상에는 전효정 작가의 <웬수의 네 번째 웨딩>과 김도연 작가의 <다시 좀비가 될 순 없어>, 두 편이 선정됐다. 우수상 수상 작가에게는 1천만원의 상금이 수여된다. <웬수의 네 번째 웨딩>은 웨딩플래너인 주인공이 짝사랑하는 남자의 새아빠가 될 자신의 철없는 친아빠의 네 번째 결혼식 웨딩플래닝을 맡게 되면서 미처 몰랐던 진정한 사랑을 찾아가는 성장 스토리다. <다시 좀비가 될 순 없어>는 좀비에게 물리는 것보다 좀비 확진자였다는 사회적 낙인이 더 무서운 세상에서 좀비 확진자였던 주인공이 회사라는 공간에서 생존해가는 이야기다. 장려상에는 조유진 작가의 <완벽한 하루>와 안세화 작가의 <손을 잡아줘>가 선정됐다. 두 작가에게는 상금 5백만원이 수여된다. 총 상금 5천만원 규모로 올해 처음으로 진행된 ‘왓챠 시리즈 각본 공모전’에는 600편 넘는 작품이 응모됐다. - 화장품을 팔기 위해선 정신감정을 받아야 한다. 메이크업 전문업체를 내려면 미용사 자격증이 있어야 한다. 일회용 이쑤시개도 개별 포장지에 제조연월을 모두 표시해야 한다…. 덩어리 규제만 경제의 발목을 잡는 것이 아니다. 은퇴 후 창업 등 자영업 전선에 나선 사람들은 전혀 예기치 않은 규제와 맞닥뜨려 곤란을 겪는 경우가 많다. 25일 한국경제신문이 자영업 규제 실태를 파악한 결과 소상공인의 창업을 가로막는 안전·위생 규제는 수십 가지에 달했다. 대표적인 게 화장품 제조·유통업체를 운영하려면 정신감정을 받아야 한다는 화장품법 조항이다. 식품의약품안전처는 소비자 안전을 위해 필요한 조항이라고 설명하지만 이것이 과연 필요한 규제인지는 논란이 분분하다. 업계의 한 관계자는 “제품 안전을 위해 하필이면 정신감정을 의무화해야 할 이유가 뭔지 모르겠다”며 “정말 황당해 하는 사람이 많다”고 말했다. A5면에 계속 고은이 기자 [email protected] 한경 기업신문고 무역협회·전경련·중기중앙회와 함께 운영합니다 - 서울시가 승용차요일제 가입 차량에 제공하던 자동차세 5% 감면 혜택을 내년부터 폐지한다. 2003년 승용차요일제가 도입된 지 11년 만에 가입 혜택을 축소하는 것이다. ▶본지 2월10일자 A31면 참조서울시는 “승용차요일제의 실효성을 높이기 위해 전문가 간담회와 시민공청회 등을 거쳐 이 같은 방안을 확정했다”고 27일 발표했다. 승용차요일제는 월~금요일 5일 중 시민 스스로 쉬는 날을 정하고 승용차요일제 전자태그를 차량에 부착한 뒤 해당 요일에 차량을 운행하지 않으면 혜택을 주는 제도다. 지난달 말 기준으로 서울 등록차량 237만대 중 33%인 79만대가 가입돼 있다.서울시는 그동안 승용차요일제 가입 차량에 대해 남산 혼잡통행료 50%, 공영주차장 요금 30% 할인과 자동차세 5% 감면 등 다양한 인센티브를 제공해왔다. 그러나 일부 가입자가 혜택만 받고 전자태그를 떼거나 운휴일을 위반하는 사례가 잇따라 제도 정비의 필요성이 제기됐다. 서울시는 시내 19곳에 전자태그 리더기를 설치해 요일제 위반 차량을 적발했지만 이들 지점을 제외하면 사실상 단속이 어렵다. 서울시는 한 해 3회 이상 운휴일을 위반한 차량 5만여대를 적발했지만 실제 위반 차량은 수십만대에 달할 것으로 보고 있다.이와 함께 전자태그 갱신제도 오는 9월부터 시행한다. 전자태그 발급일로부터 만 5년이 지난 뒤 90일 이내에 태그를 재발급받지 않으면 자동 탈퇴 처리한다. 다만 도입 초기 혼란 방지를 위해 2006~2010년에 발급된 전자태그(약 60만장)에 한해 2016년 3월30일까지 재발급받는 것을 허용한다.대신 서울시는 자기 승용차를 평상시보다 덜 몰아 주행거리를 줄이면 인센티브를 지급하는 ‘드라이빙 마일리지제’를 도입, 9월부터 시범운영에 들어갈 방침이다. 서울시는 보험사와 협의해 조만간 구체적인 인센티브 방안을 마련할 계획이다. - source_sentence: JD월드와이드보다 먼저 해외 직구 사이트를 만든 회사는? sentences: - 중국의 1, 2위 전자상거래업체 알리바바와 JD닷컴이 급성장하는 해외 직접구매(직구) 시장을 놓고 맞붙었다.16일 사우스차이나모닝포스트(SCMP)에 따르면 JD닷컴은 지난 15일 해외 직구 전용 사이트 JD월드와이드(www.jd.hk)를 개설해 본격적인 영업에 들어갔다. 중국 소비자는 이 사이트에서 중국에 직접 진출하지 않은 기업의 제품도 온라인으로 구매할 수 있다. 
현재 버버리 랄프로렌 리바이스 보스 에코 등 패션 브랜드와 시세이도 랑콤 등 화장품 브랜드가 제품을 판매하고 있다. JD닷컴 관계자는 “미국 호주 프랑스 일본 한국 등의 다수 기업이 참여 의사를 밝혔다”며 “향후 세계 1200개 브랜드 제품을 판매할 것”이라고 설명했다. 앞서 알리바바의 B2C 전자상거래 사이트인 티몰은 작년 2월 해외직구 전문 사이트 티몰글로벌(www.tmall.hk)을 열었다. 이곳에는 한국의 이마트 롯데마트 LG생활건강 등도 입점해 있다.중국 전자상거래연구센터에 따르면 중국의 해외 직구 시장은 2013년 13조원이던 것이 작년에는 27조원으로 커졌으며, 2018년에는 418조원 규모로 급팽창할 것으로 예상되고 있다. - 달라스튜디오가 신규 웹예능 ‘배달그라운드’로 뜨거운 인기를 이어간다. 글로벌 미디어 기업 에이앤이 코리아(A+E Korea)의 유튜브 채널 ‘달라스튜디오’가 2020년 돌풍을 일으킨 ‘네고왕’과 ‘발명왕’의 인기를 이어갈 신작 웹예능 ‘배달그라운드’를 오늘(12일) 저녁 6시 30분에 첫 공개한다. ‘배달그라운드’는 찾아가는 본격 전국 배달맛집 리뷰 웹예능으로, 언택트 시대에 걸맞는 먹방 리뷰 콘텐츠로 다시 한번 에이앤이 디지털 스튜디오만의 ‘남다른’ 행보를 이어갈 예정. ‘배달그라운드’는 전국 방방곡곡 숨은 배달맛집들을 발굴하고, 직접 배달 가능 지역으로 찾아가 ‘어디서든’ 100% 솔직한 먹방 리뷰를 전한다. 특히 길바닥부터 모텔 바닥까지 배달 가능 지역에 어디서든 안전하게 먹을 수 있는 곳을 찾아 무작정 펼치는 이색 먹방을 펼칠 예정. 언택트 시대로 각광받고 있는 ‘배달 맛집 투어’라는 신선한 컨셉과 SNL에서 싱크로율 100%를 자랑하는 이광수 얼굴묘사로 ‘보급형 이광수’ 라는 타이틀을 얻은 배우 겸 개그맨 이명훈이라는 신선한 캐스팅으로 더욱 기대를 북돋고 있다. 12일(목) 공개될 ‘배달그라운드’ 1화에서는 정제되지 않은 예능원석 이명훈이 거침없는 배달 맛집 투어의 첫 시작을 알린다. “얼굴은 이광수, 이름은 임영웅을 닮았다”는 닮은 꼴 부자 이명훈의 자유분방한 동네투어부터 제작진과의 남다른 티격태격 케미 폭발로 첫 화부터 새로운 웃음 폭탄의 등장을 알릴 예정. “‘동네왕’으로 이름 변경하면 안 돼요?”라며 왕이 되고 싶다는 야심을 드러낸 이명훈과 콘텐츠 재미를 위한 제작진들의 고군분투 케미가 ‘배달그라운드’ 유행을 예고한다. 에이앤이 코리아 디지털 스튜디오에서 선보인 유튜브 채널 ‘달라스튜디오’는 ‘네고왕’, ‘발명왕’ 등 메가 히트 웹예능을 잇달아 선보이고 있다. 런칭 3개월만에 구독자 100만 명 돌파하는 등 독보적인 화제성과 파급력을 입증하고 있는 달라스튜디오는 '배달그라운드'를 공개해 흥행 돌풍을 이어갈 계획이다. 찾아가는 배달 맛집 전국 투어가 시작되는 '배달그라운드' 1화는 12일(목) 저녁 6시 30분 달라스튜디오에서 공개된다. ‘배달그라운드’는 매주 목요일 저녁 6시 30분 유튜브 채널 달라스튜디오에서 공개될 예정이다. - '다우(NYSE: DOW)가 지속가능한 패키징의 상용화에 앞장서고자 세계적인 플라스틱 필름 포장재 제조사인 통 구안(Thong Guan)과 협력해 지속 가능 자원 기반의 폴리에틸렌 수지로 만든 식품용 랩 ‘나노 바이오(Nano Bio)’를 아시아태평양 지역에 출시한다고28일 밝혔다. 통 구안이 출시한 ‘나노 바이오’는 다우의 저밀도 폴리에틸렌(LDPE) 수지인 ELITE™ 5230GC R Enhanced Polyethylene Resin를 기반으로 제작된다. 이 수지는 종이 생산 과정에서 발생하는 톨유(Tall oil)를 활용한 친환경 포장재로, 석유화학 원료를 기반으로 한 수지의 필름 생산 과정과 비교했을 때 탁월한 성능을 유지하면서도 현저히 낮은 양의 이산화탄소를 배출하는 것이 특징이다. 아울러 다우의 친환경 수지는 다른 지속 가능 자원 기반 제품과 달리 생태계 먹이사슬에 영향을 끼치지 않는 것이 특징이다. 지속가능성을 고려해 종이 생산시의 부산물을 사용하여 추가적인 토지 개발도 필요로 하지 않는다. 또한 통 구안의 랩 필름은 첨단 나노기술을 적용한 다운게이징(downgauging) 필름으로 기존 랩과 동일한 강도, 유연성, 내구성을 갖추고 있어 우수한 성능과 지속가능성을 동시에 추구할 수 있게 해준다. 또한 다우는 매스 밸런스 접근법(mass balance approach)을 적용해 기존의 석유 기반 원료를 이용한 생산시설에서 바이오 기반 원료를 사용할 수 있게 함으로써 별도 생산시설 구축에 따른 환경 비용을 줄이고, 생산자들로 하여금 더 많은 바이오 기반 원료를 이용할 수 있도록 돕는다. 다우 아시아태평양 지역 패키징 및 특수 플라스틱 사업부 총괄 부사장인 밤방 캔드라(Bambang Candra)는 “최근 새롭게 발표한 지속가능성 목표를 달성하기 위해 다우는 더 적은 자원으로 제품을 생산할 수 있는 기술을 개발, 고객들로 하여금 탄소 배출량을 감소할 수 있도록 지원하고 있다” 며 “다우 아태지역 최초의 바이오 기반 폴리에틸렌 출시를 시작으로 지속가능한 포장재 상용화와 플라스틱의 순환경제를 이뤄 나갈 수 있기를 기대한다”고 말했다. 통 구안의 다토 앙 푼 추안(Dato'' Ang Poon Chuan) 매니징 디렉터는 “다우와의 협력은 지속가능한 환경과 자원 보존을 위한 다양한 노력의 일환”이라며 “다우의 친환경 수지를 기반으로 제작된 통 구안의 ‘나노 바이오’는 플라스틱 사용량은 줄이면서 우수한 성능은 그대로 유지해 고객사들의 만족도를 충족시켜줄 수 있는 뛰어난 제품”이라고 밝혔다. 한편, 종이 생산 과정에서 발생하는 톨유로부터 추출한 바이오 나프타(bio-naphtha)를 이용해 생산되는 다우의 저밀도 폴리에틸렌 수지는 국제 친환경 바이오 연료 인증(ISCC, International Sustainability & Carbon Certificate)을 획득한 바 있다. 이러한 생산 방식은 복잡한 제조 혹은 생산 시스템 내에서 지속가능한 원료의 공급을 원활히 함으로써 업계의 지속가능성을 향상시키고, 모든 단계의 추적가능성 기준(traceability criteria)을 충족시킨다.' 
model-index: - name: SentenceTransformer based on shangrilar/klue-roberta-base-klue-sts results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: Unknown type: unknown metrics: - type: pearson_cosine value: 0.8053055366603666 name: Pearson Cosine - type: spearman_cosine value: 0.8150473226218842 name: Spearman Cosine --- # SentenceTransformer based on shangrilar/klue-roberta-base-klue-sts This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [shangrilar/klue-roberta-base-klue-sts](https://huggingface.co/shangrilar/klue-roberta-base-klue-sts). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [shangrilar/klue-roberta-base-klue-sts](https://huggingface.co/shangrilar/klue-roberta-base-klue-sts) <!-- at revision 7198ee8bcb0a1028d0d8cb4e645fdccafdfa0d5c --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ 'JD월드와이드보다 먼저 해외 직구 사이트를 만든 회사는?', '중국의 1, 2위 전자상거래업체 알리바바와 JD닷컴이 급성장하는 해외 직접구매(직구) 시장을 놓고 맞붙었다.16일 사우스차이나모닝포스트(SCMP)에 따르면 JD닷컴은 지난 15일 해외 직구 전용 사이트 JD월드와이드(www.jd.hk)를 개설해 본격적인 영업에 들어갔다. 중국 소비자는 이 사이트에서 중국에 직접 진출하지 않은 기업의 제품도 온라인으로 구매할 수 있다. 현재 버버리 랄프로렌 리바이스 보스 에코 등 패션 브랜드와 시세이도 랑콤 등 화장품 브랜드가 제품을 판매하고 있다. JD닷컴 관계자는 “미국 호주 프랑스 일본 한국 등의 다수 기업이 참여 의사를 밝혔다”며 “향후 세계 1200개 브랜드 제품을 판매할 것”이라고 설명했다. 앞서 알리바바의 B2C 전자상거래 사이트인 티몰은 작년 2월 해외직구 전문 사이트 티몰글로벌(www.tmall.hk)을 열었다. 이곳에는 한국의 이마트 롯데마트 LG생활건강 등도 입점해 있다.중국 전자상거래연구센터에 따르면 중국의 해외 직구 시장은 2013년 13조원이던 것이 작년에는 27조원으로 커졌으며, 2018년에는 418조원 규모로 급팽창할 것으로 예상되고 있다.', "달라스튜디오가 신규 웹예능 ‘배달그라운드’로 뜨거운 인기를 이어간다. 글로벌 미디어 기업 에이앤이 코리아(A+E Korea)의 유튜브 채널 ‘달라스튜디오’가 2020년 돌풍을 일으킨 ‘네고왕’과 ‘발명왕’의 인기를 이어갈 신작 웹예능 ‘배달그라운드’를 오늘(12일) 저녁 6시 30분에 첫 공개한다. ‘배달그라운드’는 찾아가는 본격 전국 배달맛집 리뷰 웹예능으로, 언택트 시대에 걸맞는 먹방 리뷰 콘텐츠로 다시 한번 에이앤이 디지털 스튜디오만의 ‘남다른’ 행보를 이어갈 예정. ‘배달그라운드’는 전국 방방곡곡 숨은 배달맛집들을 발굴하고, 직접 배달 가능 지역으로 찾아가 ‘어디서든’ 100% 솔직한 먹방 리뷰를 전한다. 특히 길바닥부터 모텔 바닥까지 배달 가능 지역에 어디서든 안전하게 먹을 수 있는 곳을 찾아 무작정 펼치는 이색 먹방을 펼칠 예정. 
언택트 시대로 각광받고 있는 ‘배달 맛집 투어’라는 신선한 컨셉과 SNL에서 싱크로율 100%를 자랑하는 이광수 얼굴묘사로 ‘보급형 이광수’ 라는 타이틀을 얻은 배우 겸 개그맨 이명훈이라는 신선한 캐스팅으로 더욱 기대를 북돋고 있다. 12일(목) 공개될 ‘배달그라운드’ 1화에서는 정제되지 않은 예능원석 이명훈이 거침없는 배달 맛집 투어의 첫 시작을 알린다. “얼굴은 이광수, 이름은 임영웅을 닮았다”는 닮은 꼴 부자 이명훈의 자유분방한 동네투어부터 제작진과의 남다른 티격태격 케미 폭발로 첫 화부터 새로운 웃음 폭탄의 등장을 알릴 예정. “‘동네왕’으로 이름 변경하면 안 돼요?”라며 왕이 되고 싶다는 야심을 드러낸 이명훈과 콘텐츠 재미를 위한 제작진들의 고군분투 케미가 ‘배달그라운드’ 유행을 예고한다. 에이앤이 코리아 디지털 스튜디오에서 선보인 유튜브 채널 ‘달라스튜디오’는 ‘네고왕’, ‘발명왕’ 등 메가 히트 웹예능을 잇달아 선보이고 있다. 런칭 3개월만에 구독자 100만 명 돌파하는 등 독보적인 화제성과 파급력을 입증하고 있는 달라스튜디오는 '배달그라운드'를 공개해 흥행 돌풍을 이어갈 계획이다. 찾아가는 배달 맛집 전국 투어가 시작되는 '배달그라운드' 1화는 12일(목) 저녁 6시 30분 달라스튜디오에서 공개된다. ‘배달그라운드’는 매주 목요일 저녁 6시 30분 유튜브 채널 달라스튜디오에서 공개될 예정이다.", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:----------| | pearson_cosine | 0.8053 | | **spearman_cosine** | **0.815** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 17,552 training samples * Columns: <code>sentence_0</code> and <code>sentence_1</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | |:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 17.84 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 241 tokens</li><li>mean: 433.76 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | sentence_0 | sentence_1 | |:------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>보금자리주택 사업을 유지하겠다고 한 인물은?</code> | <code>대도시 주변 그린벨트(개발제한구역) 해제지역에 들어서는 대규모 주택단지 ‘보금자리’. 이 주택 브랜드가 도입된 건 2009년이다. 이명박 정부가 2018년까지 주택 150만가구를 공급하는 것을 목표로 시작한 장기 주택정책 사업이다. 지역별로는 수도권 100만가구, 지방 50만가구이며 유형별로는 분양 70만가구, 임대 80만가구 등이다.수도권에서는 서울 강남, 서울 서초, 고양 원흥, 하남 미사가 보금자리 시범지구(1차지구)로 선정된 데 이어 서울 세곡2, 서울 내곡, 남양주 진건, 시흥 인계, 구리 갈매, 부천 옥길 등이 2차 지구로 뽑혔다. 3차에서는 광명·시흥을 비롯해 서울 항동, 인천 구월 등이 지정되는 등 총 6차에 걸쳐 30여만가구가 들어서는 보금자리지구가 선정됐다. 2018년까지 150만가구를 공급하는 게 목표인 가운데 2009~2012년 4년 동안 사업 승인을 받은 물량은 36만가구로 당초 계획의 59.16%에 그쳤다. 수도권은 54.71%, 지방은 68.06%였다. 착공 및 준공 실적은 더 떨어진다. 같은 기간 착공 실적은 전국 8만1492가구로 당초 계획의 13.58%에 불과했다. 준공 실적도 1만가구(1.68%)에 머물렀다. 국회 국토교통위원회 소속 오병윤 통합진보당 의원(광주 서구을)이 지난해 국정감사 때 보금자리주택사업을 총괄 분석한 결과다. 보금자리주택은 이번 정부가 들어서면서 추진력을 더 잃었다는 평가가 나온다. 서승환 국토교통부 장관은 지난해 3월 취임 기자간담회에서 “앞으로 대규모 개발을 통한 주택 공급은 바람직하지 않다”고 말해 보금자리주택 사업의 일반 분양을 축소하거나 중단하겠다는 뜻을 밝혔다. 현 정부는 도심의 확장을 막는 그린벨트를 해제한 데다 2008년 글로벌 금융위기 이후 침체된 민간 주택시장을 교란시킨다는 지적을 받아온 보금자리의 이름을 ‘공공주택’으로 고치는 등 보금자리의 흔적 지우기에 나섰다.</code> | | <code>바이마르 정부가 화폐를 수없이 찍어낸 이유는?</code> | <code>1919년부터 1933년까지 독일은 제 1차 세계대전 이후 세워진 바이마르 공화국이 통치했다. 바이마르 공화국은 이원집정부제와 민주주의에 기반한 공화국이었다. 하지만 바이마르 공화국은 성립 직후부터 수많은 위기에 직면하게 된다. 초인플레이션과 정치적 혼란, 정당들의 분열과 1차 세계대전의 승전국들과의 껄끄러운 관계 등이 끊임없이 바이마르 공화국의 목을 잡았던 것이다. 특히 1919년에 맺어진 베르사유 조약에서, 당시 패전국이었던 독일은 엄청난 양의 배상금을 물어주어야했는데, 바이마르 정부는 이 배상금을 주기 위하여 대규모로 돈을 찍어냈다. 정부가 지나치게 통화 유통량을 증가시키자 시중에서는 당연히 대규모 인플레이션이 일어났고, 이 때문에 모든 상품 가격들이 상승하고 독일은 경제적인 대혼란에 빠져들었다. 게다가 이러한 노력에도 불구하고 바이마르 공화국이 프랑스에게 배상금을 갚는 데에 실패하자, 프랑스 군대는 루르에 있는 독일의 공장 지대를 강제로 점령하였다. 이는 독일 대중들의 격분을 불러왔고, 바이마르 공화국은 갈수록 위태로워져만 갔다.<br><br>국가사회주의 독일 노동자당(나치당)은 1920년에 창당되었다. 1년 전에 창당되었던 독일 노동자당을 이어 만들어졌으며, 당시 독일에서 활하던 극우 정당 들 중 하나였다. 나치당 강령에는 바이마르 공화국의 해체, 베르사유 조약 파기, 급진적인 반유대주의, 반공주의 등이 들어있었다. 
이들은 국민들에게 강력한 중앙 정부를 약속했으며, 주거 문제를 안정시키겠다고 공언했으며 인종에 기반한 정책을 펼칠 것이라고 말했다. 또한 막대한 부를 축적하고 있던 유대인들과 같은 인종들을 쓸어내어 세계의 '인종적 순수성'을 지켜낼 것이는 망언을 하는 등 과격하고 급진적인 정책들을 쏟아내었다. 나치당은 돌격대를 창설하여 자신들에게 반대하는 사람들에게 폭행을 가하거나 죽였고, 거리에서 유대인들을 폭행하거나 상대 정당의 모임들에 쳐들어가며 공포 분위기를 조장하였다.<br><br>1929년 10월 24일, 미국의 주식 시장이 대폭락하자 이는 미국의 달러에 깊이 의존...</code> | | <code>오바마의 방일 일정을 연장하는데 기여한 것으로 보이는 규칙은?</code> | <code>버락 오바마 미국 대통령이 오는 25일 한국을 방문해 박근혜 대통령과 정상회담을 한다. 3일 외교 소식통에 따르면 오바마 대통령은 23일 2박3일간 일본을 국빈 방문한 뒤 25일 1박2일 일정으로 방한한다. 일본과 한국 모두 1박2일을 계획했으나 일본 일정을 하루 늘렸다. 민경욱 청와대 대변인은 “오바마 대통령이 1박 일정으로 방한한다”며 “구체적인 날짜와 일정은 미국과 협의 중”이라고 확답을 피했다.미국은 일본의 끈질긴 요청으로 막판에 일정을 재조정한 것으로 알려졌다. 일본은 오바마 대통령의 방문이 1996년 빌 클린턴 전 대통령 이후 18년 만에 이뤄지는 미국 대통령의 국빈 방문이라는 점에서 큰 의미를 부여하고 있다. 한 외교 소식통은 “일본은 지난달 백악관이 아시아 순방 계획을 발표했을 때부터 방문 일정 연장을 집요하게 요구해왔다”며 “최근에는 정상회담 외에도 일왕 예방과 만찬 행사 등을 위해 최소 2박3일이 필요하다고 주장해 미국이 막판에 일정을 하루 앞당겼다”고 설명했다. 이 같은 배경에는 최근 일본의 대미 정책이 주효했다는 평가가 나온다. 일본은 최근 무기 관련 기술 수출을 금지하는 원칙을 47년 만에 폐지했다. 미국 정부는 이날 “일본의 정책 개정을 환영한다”며 “미국 및 다른 파트너 국가와의 방산 협력 기회를 넓혀주고 절차를 간소화해줄 것”이라는 입장을 발표했다. 일본이 환태평양경제동반자협정(TPP) 협상에서 미국의 요구 조건을 일부 수용하는 등 한발 물러설 것이란 전망도 나온다. 오바마 대통령은 방한기간 박 대통령과 정상회담을 하고 교육·문화 이벤트, 주한미군 관련 행사에 참석할 것으로 알려졌다. 2009년 취임 이후 네 번째 한국 방문이다. 북핵 문제와 군사안보 분야에서 한·미 동맹 등이 주요 의제로 논의될 것으로 예상된다.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 1 - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - 
`accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | Training Loss | spearman_cosine | |:------:|:----:|:-------------:|:---------------:| | 0 | 0 | - | 0.8150 | | 0.4558 | 500 | 0.1625 | - | | 0.9116 | 1000 | 0.1118 | - | ### Framework Versions - Python: 3.12.3 - Sentence Transformers: 3.3.1 - Transformers: 4.47.1 - PyTorch: 2.5.1+cu124 - Accelerate: 1.2.1 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
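To reproduce a comparable finetuning run, the sketch below mirrors the hyperparameters reported above (MultipleNegativesRankingLoss with scale 20.0, batch size 16, 1 epoch, no-duplicates batch sampler). It is an illustrative sketch rather than the exact training script: the two question-passage pairs are placeholders for the 17,552-sample dataset, and the output directory name is arbitrary.

```python
# Illustrative sketch of a comparable finetuning run (not the original training script).
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("shangrilar/klue-roberta-base-klue-sts")

# Placeholder (question, passage) pairs standing in for the real MRC-style training set.
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "김치의 숙성정도를 알아내는 센서가 있는 김치냉장고 이름은?",
        "전자태그 갱신제와 함께 도입되는 제도는?",
    ],
    "sentence_1": [
        "삼성이 올해 김치냉장고 시장에 내놓은 승부수는 ‘메탈’이다. ...",
        "서울시는 ‘드라이빙 마일리지제’를 도입, 9월부터 시범운영에 들어갈 방침이다. ...",
    ],
})

loss = MultipleNegativesRankingLoss(model, scale=20.0)  # cosine similarity is the default similarity_fct

args = SentenceTransformerTrainingArguments(
    output_dir="klue-roberta-base-klue-sts-mrc",    # arbitrary output path
    num_train_epochs=1,
    per_device_train_batch_size=16,
    batch_sampler=BatchSamplers.NO_DUPLICATES,      # avoid duplicate texts inside a batch
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
model.save_pretrained("klue-roberta-base-klue-sts-mrc/final")
```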
null
Non_BioNLP
# SentenceTransformer based on shangrilar/klue-roberta-base-klue-sts This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [shangrilar/klue-roberta-base-klue-sts](https://huggingface.co/shangrilar/klue-roberta-base-klue-sts). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [shangrilar/klue-roberta-base-klue-sts](https://huggingface.co/shangrilar/klue-roberta-base-klue-sts) <!-- at revision 7198ee8bcb0a1028d0d8cb4e645fdccafdfa0d5c --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ 'JD월드와이드보다 먼저 해외 직구 사이트를 만든 회사는?', '중국의 1, 2위 전자상거래업체 알리바바와 JD닷컴이 급성장하는 해외 직접구매(직구) 시장을 놓고 맞붙었다.16일 사우스차이나모닝포스트(SCMP)에 따르면 JD닷컴은 지난 15일 해외 직구 전용 사이트 JD월드와이드(www.jd.hk)를 개설해 본격적인 영업에 들어갔다. 중국 소비자는 이 사이트에서 중국에 직접 진출하지 않은 기업의 제품도 온라인으로 구매할 수 있다. 현재 버버리 랄프로렌 리바이스 보스 에코 등 패션 브랜드와 시세이도 랑콤 등 화장품 브랜드가 제품을 판매하고 있다. JD닷컴 관계자는 “미국 호주 프랑스 일본 한국 등의 다수 기업이 참여 의사를 밝혔다”며 “향후 세계 1200개 브랜드 제품을 판매할 것”이라고 설명했다. 앞서 알리바바의 B2C 전자상거래 사이트인 티몰은 작년 2월 해외직구 전문 사이트 티몰글로벌(www.tmall.hk)을 열었다. 이곳에는 한국의 이마트 롯데마트 LG생활건강 등도 입점해 있다.중국 전자상거래연구센터에 따르면 중국의 해외 직구 시장은 2013년 13조원이던 것이 작년에는 27조원으로 커졌으며, 2018년에는 418조원 규모로 급팽창할 것으로 예상되고 있다.', "달라스튜디오가 신규 웹예능 ‘배달그라운드’로 뜨거운 인기를 이어간다. 글로벌 미디어 기업 에이앤이 코리아(A+E Korea)의 유튜브 채널 ‘달라스튜디오’가 2020년 돌풍을 일으킨 ‘네고왕’과 ‘발명왕’의 인기를 이어갈 신작 웹예능 ‘배달그라운드’를 오늘(12일) 저녁 6시 30분에 첫 공개한다. ‘배달그라운드’는 찾아가는 본격 전국 배달맛집 리뷰 웹예능으로, 언택트 시대에 걸맞는 먹방 리뷰 콘텐츠로 다시 한번 에이앤이 디지털 스튜디오만의 ‘남다른’ 행보를 이어갈 예정. ‘배달그라운드’는 전국 방방곡곡 숨은 배달맛집들을 발굴하고, 직접 배달 가능 지역으로 찾아가 ‘어디서든’ 100% 솔직한 먹방 리뷰를 전한다. 특히 길바닥부터 모텔 바닥까지 배달 가능 지역에 어디서든 안전하게 먹을 수 있는 곳을 찾아 무작정 펼치는 이색 먹방을 펼칠 예정. 언택트 시대로 각광받고 있는 ‘배달 맛집 투어’라는 신선한 컨셉과 SNL에서 싱크로율 100%를 자랑하는 이광수 얼굴묘사로 ‘보급형 이광수’ 라는 타이틀을 얻은 배우 겸 개그맨 이명훈이라는 신선한 캐스팅으로 더욱 기대를 북돋고 있다. 12일(목) 공개될 ‘배달그라운드’ 1화에서는 정제되지 않은 예능원석 이명훈이 거침없는 배달 맛집 투어의 첫 시작을 알린다. “얼굴은 이광수, 이름은 임영웅을 닮았다”는 닮은 꼴 부자 이명훈의 자유분방한 동네투어부터 제작진과의 남다른 티격태격 케미 폭발로 첫 화부터 새로운 웃음 폭탄의 등장을 알릴 예정. “‘동네왕’으로 이름 변경하면 안 돼요?”라며 왕이 되고 싶다는 야심을 드러낸 이명훈과 콘텐츠 재미를 위한 제작진들의 고군분투 케미가 ‘배달그라운드’ 유행을 예고한다. 
에이앤이 코리아 디지털 스튜디오에서 선보인 유튜브 채널 ‘달라스튜디오’는 ‘네고왕’, ‘발명왕’ 등 메가 히트 웹예능을 잇달아 선보이고 있다. 런칭 3개월만에 구독자 100만 명 돌파하는 등 독보적인 화제성과 파급력을 입증하고 있는 달라스튜디오는 '배달그라운드'를 공개해 흥행 돌풍을 이어갈 계획이다. 찾아가는 배달 맛집 전국 투어가 시작되는 '배달그라운드' 1화는 12일(목) 저녁 6시 30분 달라스튜디오에서 공개된다. ‘배달그라운드’는 매주 목요일 저녁 6시 30분 유튜브 채널 달라스튜디오에서 공개될 예정이다.", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:----------| | pearson_cosine | 0.8053 | | **spearman_cosine** | **0.815** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 17,552 training samples * Columns: <code>sentence_0</code> and <code>sentence_1</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | |:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 17.84 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 241 tokens</li><li>mean: 433.76 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | sentence_0 | sentence_1 | |:------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>보금자리주택 사업을 유지하겠다고 한 인물은?</code> | <code>대도시 주변 그린벨트(개발제한구역) 해제지역에 들어서는 대규모 주택단지 ‘보금자리’. 이 주택 브랜드가 도입된 건 2009년이다. 
이명박 정부가 2018년까지 주택 150만가구를 공급하는 것을 목표로 시작한 장기 주택정책 사업이다. 지역별로는 수도권 100만가구, 지방 50만가구이며 유형별로는 분양 70만가구, 임대 80만가구 등이다.수도권에서는 서울 강남, 서울 서초, 고양 원흥, 하남 미사가 보금자리 시범지구(1차지구)로 선정된 데 이어 서울 세곡2, 서울 내곡, 남양주 진건, 시흥 인계, 구리 갈매, 부천 옥길 등이 2차 지구로 뽑혔다. 3차에서는 광명·시흥을 비롯해 서울 항동, 인천 구월 등이 지정되는 등 총 6차에 걸쳐 30여만가구가 들어서는 보금자리지구가 선정됐다. 2018년까지 150만가구를 공급하는 게 목표인 가운데 2009~2012년 4년 동안 사업 승인을 받은 물량은 36만가구로 당초 계획의 59.16%에 그쳤다. 수도권은 54.71%, 지방은 68.06%였다. 착공 및 준공 실적은 더 떨어진다. 같은 기간 착공 실적은 전국 8만1492가구로 당초 계획의 13.58%에 불과했다. 준공 실적도 1만가구(1.68%)에 머물렀다. 국회 국토교통위원회 소속 오병윤 통합진보당 의원(광주 서구을)이 지난해 국정감사 때 보금자리주택사업을 총괄 분석한 결과다. 보금자리주택은 이번 정부가 들어서면서 추진력을 더 잃었다는 평가가 나온다. 서승환 국토교통부 장관은 지난해 3월 취임 기자간담회에서 “앞으로 대규모 개발을 통한 주택 공급은 바람직하지 않다”고 말해 보금자리주택 사업의 일반 분양을 축소하거나 중단하겠다는 뜻을 밝혔다. 현 정부는 도심의 확장을 막는 그린벨트를 해제한 데다 2008년 글로벌 금융위기 이후 침체된 민간 주택시장을 교란시킨다는 지적을 받아온 보금자리의 이름을 ‘공공주택’으로 고치는 등 보금자리의 흔적 지우기에 나섰다.</code> | | <code>바이마르 정부가 화폐를 수없이 찍어낸 이유는?</code> | <code>1919년부터 1933년까지 독일은 제 1차 세계대전 이후 세워진 바이마르 공화국이 통치했다. 바이마르 공화국은 이원집정부제와 민주주의에 기반한 공화국이었다. 하지만 바이마르 공화국은 성립 직후부터 수많은 위기에 직면하게 된다. 초인플레이션과 정치적 혼란, 정당들의 분열과 1차 세계대전의 승전국들과의 껄끄러운 관계 등이 끊임없이 바이마르 공화국의 목을 잡았던 것이다. 특히 1919년에 맺어진 베르사유 조약에서, 당시 패전국이었던 독일은 엄청난 양의 배상금을 물어주어야했는데, 바이마르 정부는 이 배상금을 주기 위하여 대규모로 돈을 찍어냈다. 정부가 지나치게 통화 유통량을 증가시키자 시중에서는 당연히 대규모 인플레이션이 일어났고, 이 때문에 모든 상품 가격들이 상승하고 독일은 경제적인 대혼란에 빠져들었다. 게다가 이러한 노력에도 불구하고 바이마르 공화국이 프랑스에게 배상금을 갚는 데에 실패하자, 프랑스 군대는 루르에 있는 독일의 공장 지대를 강제로 점령하였다. 이는 독일 대중들의 격분을 불러왔고, 바이마르 공화국은 갈수록 위태로워져만 갔다.<br><br>국가사회주의 독일 노동자당(나치당)은 1920년에 창당되었다. 1년 전에 창당되었던 독일 노동자당을 이어 만들어졌으며, 당시 독일에서 활하던 극우 정당 들 중 하나였다. 나치당 강령에는 바이마르 공화국의 해체, 베르사유 조약 파기, 급진적인 반유대주의, 반공주의 등이 들어있었다. 이들은 국민들에게 강력한 중앙 정부를 약속했으며, 주거 문제를 안정시키겠다고 공언했으며 인종에 기반한 정책을 펼칠 것이라고 말했다. 또한 막대한 부를 축적하고 있던 유대인들과 같은 인종들을 쓸어내어 세계의 '인종적 순수성'을 지켜낼 것이는 망언을 하는 등 과격하고 급진적인 정책들을 쏟아내었다. 나치당은 돌격대를 창설하여 자신들에게 반대하는 사람들에게 폭행을 가하거나 죽였고, 거리에서 유대인들을 폭행하거나 상대 정당의 모임들에 쳐들어가며 공포 분위기를 조장하였다.<br><br>1929년 10월 24일, 미국의 주식 시장이 대폭락하자 이는 미국의 달러에 깊이 의존...</code> | | <code>오바마의 방일 일정을 연장하는데 기여한 것으로 보이는 규칙은?</code> | <code>버락 오바마 미국 대통령이 오는 25일 한국을 방문해 박근혜 대통령과 정상회담을 한다. 3일 외교 소식통에 따르면 오바마 대통령은 23일 2박3일간 일본을 국빈 방문한 뒤 25일 1박2일 일정으로 방한한다. 일본과 한국 모두 1박2일을 계획했으나 일본 일정을 하루 늘렸다. 민경욱 청와대 대변인은 “오바마 대통령이 1박 일정으로 방한한다”며 “구체적인 날짜와 일정은 미국과 협의 중”이라고 확답을 피했다.미국은 일본의 끈질긴 요청으로 막판에 일정을 재조정한 것으로 알려졌다. 일본은 오바마 대통령의 방문이 1996년 빌 클린턴 전 대통령 이후 18년 만에 이뤄지는 미국 대통령의 국빈 방문이라는 점에서 큰 의미를 부여하고 있다. 한 외교 소식통은 “일본은 지난달 백악관이 아시아 순방 계획을 발표했을 때부터 방문 일정 연장을 집요하게 요구해왔다”며 “최근에는 정상회담 외에도 일왕 예방과 만찬 행사 등을 위해 최소 2박3일이 필요하다고 주장해 미국이 막판에 일정을 하루 앞당겼다”고 설명했다. 이 같은 배경에는 최근 일본의 대미 정책이 주효했다는 평가가 나온다. 일본은 최근 무기 관련 기술 수출을 금지하는 원칙을 47년 만에 폐지했다. 미국 정부는 이날 “일본의 정책 개정을 환영한다”며 “미국 및 다른 파트너 국가와의 방산 협력 기회를 넓혀주고 절차를 간소화해줄 것”이라는 입장을 발표했다. 일본이 환태평양경제동반자협정(TPP) 협상에서 미국의 요구 조건을 일부 수용하는 등 한발 물러설 것이란 전망도 나온다. 오바마 대통령은 방한기간 박 대통령과 정상회담을 하고 교육·문화 이벤트, 주한미군 관련 행사에 참석할 것으로 알려졌다. 2009년 취임 이후 네 번째 한국 방문이다. 
북핵 문제와 군사안보 분야에서 한·미 동맹 등이 주요 의제로 논의될 것으로 예상된다.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 1 - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - 
`torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | Training Loss | spearman_cosine | |:------:|:----:|:-------------:|:---------------:| | 0 | 0 | - | 0.8150 | | 0.4558 | 500 | 0.1625 | - | | 0.9116 | 1000 | 0.1118 | - | ### Framework Versions - Python: 3.12.3 - Sentence Transformers: 3.3.1 - Transformers: 4.47.1 - PyTorch: 2.5.1+cu124 - Accelerate: 1.2.1 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "shangrilar/klue-roberta-base-klue-sts", "library_name": "sentence-transformers", "metrics": ["pearson_cosine", "spearman_cosine"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:17552", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "김치의 숙성정도를 알아내는 센서가 있는 김치냉장고 이름은?", "sentences": ["알코올 의존증은 생리학적 원인과 심리학적 원인에 의해 심화된다. 체내의 알코올 대사 과정에서 발생하는 테트라하이드로 이소퀴놀란스는 모르핀과 비슷한 구조를 갖는 물질로 동물 실험에서 알코올 기호를 높이는 것이 확인됐다. 이 물질로 인해 알코올 의존증 환자는 술을 끊을 경우 오한, 구토, 무력감 등의 금단 증상을 겪는다. 또한 알코올 의존증은 심리적 원인에도 많은 영향을 받는데, 가족의 사랑을 받지 못해 의존적인 성향을 보이는 사람들이 보다 알코올 의존증에 취약하다. \n\n신체적 원인은, 1950년대 초반에 캘리포니아 공과대학 연구소의 연구 결과를 통한 인체의 뇌에 쾌락 중추 연구 결과 따르면, 신경해부학적로 볼때 알코올이 오피 오이드(아편이라는 물질 과 화학성분이 유사)라는 물질이 분비를 촉진하여, 알코올에 의해서 자극을 통해서, 지속적인 쾌감을 느끼려는 욕구에 의해서 술에 대한 중독이 심화 된다는 것이다. \n유전적으로 부모가 알코올 의존상태일때는 그 자녀는 4배 이상 알코올 의존증환자가되며, 심리적으로는 현실에 대한 불안이나 억압 또는 부정적인것을 잊어버리기 위한 보상을 받으려는 욕구로 알코올을 섭취한다. \n\n알코올 의존의 원인이 되는 알코올 남용과 알코올 의존의 경우에는 다음과 같이 나누어진다. \n*‘알코올남용’의 경우 알코올에 대한 의존이 심한것으로 반복적 음주로 직장 및 가정에서 역할을 다하지 못하거나, 신체적으로 건강등에 위험한 상황에서 반복적으로 음주한다.\n* ‘알코올의존’은 예전만큼 음주해서는 알코올의 효과가 감소되어 음주량이 더 많아 지는것 및 술을 줄이거나 끊었을 때 나타나는 불안, 불면, 설사, 환청, 환시, 간질발작과 같은 증상등이 나타나는데 이런 증상을 없애기 위해 술을 또 마시는것등이다.", "광주광역시 광산구의 삼성전자 냉장고 생산라인. 18일 찾은 이곳에선 요즘 김장철을 맞아 19초마다 1대씩, 매일 2000여대의 김치냉장고를 생산하고 있었다. 생산라인을 총괄하는 김광덕 상무는 “11월엔 하루 10시간씩 2교대로 일해도 소화하지 못할 만큼 물량이 몰리고 있다”고 말했다.삼성이 올해 김치냉장고 시장에 내놓은 승부수는 ‘메탈’이다. 김치를 장독에 담아 땅속에 묻는 것과 같은 수준으로 온도 변화를 최소화하기 위해 김치냉장고 칸칸을 알루미늄 스테인리스 등 금속으로 감쌌다. 제품명을 ‘메탈 그라운드’로 정한 것도 그래서다. 이를 통해 냉장고 문을 여닫을 때도 내부 온도 변화를 ±0.3도 이내로 유지할 수 있게 했다. 온도 편차가 0.2도만 커져도 김치가 15%나 더 숙성하기 때문이라고 삼성 측은 설명했다.김치냉장고 개발팀이 금속 재질을 적용하기로 결정한 것은 지난 1월이었다. 지난해 내놓은 제품은 김치통 밖에 캡슐을 하나 더 씌우는 방법으로 정온성을 유지했다. 하지만 소비자 입장에선 김치를 꺼내기 위해 캡슐에 이어 김치통을 다시 열어야 해 불편했다. 그래서 캡슐을 없애는 대신 냉기 전달력이 뛰어난 금속을 쓰기로 한 것이다.이때부터 김치와의 전쟁이 시작됐다. 처음 시도하는 것이다 보니 어떤 두께로 어떤 소재의 금속을 써야 김치를 익히는 데 가장 좋은지 알 수가 없었다. 또 다양해진 소비자 입맛에 맞추기 위해서는 배추김치 외에도 묵은지, 깍두기, 저염 김치 등 다양한 김치에 맞는 숙성기술을 확보해야 했다. 결국 먹어 보는 수밖에 없었다. 개발팀은 금속판 두께, 온도 등 다양한 변수를 설정하고 직접 김치를 익힌 뒤 시식했다. 이날 방문한 연구실에도 수십대의 냉장고에 갖가지 김치들이 보관돼 있었다. 이명주 책임연구원은 “김치를 먹고 빵 같은 것을 씹어 입을 씻어낸 뒤 다시 김치를 먹기를 몇 달이나 반복했다”며 “실험용으로 쓴 김치가 5t이 넘는다”고 말했다. 결국 여섯 종류의 김치를 익히는 최적의 기술을 개발해 제품에 담았다.개발팀은 여기에 그치지 않고 보관통마다 담겨 있는 김치가 어느 정도 익었는지 볼 수 있는 센서도 달기로 했다. 설익은 김치를 좋아하는 사람도 있고 조금 쉰 듯한 맛을 즐기는 사람도 있기 때문이다. 팀은 연구를 통해 김치가 익을수록 초산이라는 성분을 많이 뿜어낸다는 것을 알아냈다. ‘메탈 그라운드’에 포함된 김치통에는 이 센서가 각각 달려 있다.김치냉장고는 국내에서만 팔리는 제품이지만 시장 규모는 작지 않다. 매년 1조원어치의 김치냉장고가 팔리고 있다는 게 업계 분석이다. 특히 올해는 풍년으로 배추 가격이 크게 떨어진 데다 교체 수요가 몰리면서 예년보다 시장 규모가 10% 이상 커질 것으로 전망하고 있다. 삼성을 비롯한 각 사가 치열한 김치냉장고 신제품 경쟁을 벌이는 이유다.", "그러나 한편으로는 미국 사회의 인종차별주의적, 흑인을 차별하는 태도를 목격하면서 백인들의 오만함에 대한 부정적인 생각을 갖게 되었다. 미국 체류 중 황인종을 멸시하는 백인 불량배들에게 끌려가 가끔 얻어맞기도 하고, “유색 인종”이라는 이유로 호텔 투숙을 거절당해 정거장에서 밤을 지샜는가 하면, 세례 교인이었던 그와 가장 가까워야 할 미국인 선교사에게마저 늘 은근히 - 그리고 가끔은 매우 노골적으로 - “왕따”당하는 처지였다 귀국 이후에 조선에서 만난 미국인 선교사들도 비슷하게 그를 대우했다.\n\n당시의 그에 대해 후일 노르웨이 오슬로 대학 교수 박노자는 '백인 인종주의에 상처받아 만신창이가 되었을 그의 마음 상태 '를 지적하기도 했다.\n\n만약 내가 마음대로 내 고국을 선택할 수 있다면, 나는 일본을 선택할 것이다. 오, 축복받은 일본이여! 동방의 낙원이여! |1893년 11월 1일자 일기 \n\n그는 평소 조선인들의 불결한 위생과 겉치레, 감정적 대응 등을 내심 경멸해왔다. 그리고 서구의 기독교사상과 일한 만큼만 대가를 받는다는 청교도 정신, 합리주의의 수용을 통해 이를 개선하려 했다. 그러나 평소 기독교선교사들로부터도 자기 일 처리도 못하는 작은 아이, 원주민, 예의를 지키지 않아도 될 만한 사람의 대접을 늘 받아 온 윤치호는, 인종주의야말로 미국의 가장 중요한 특징이라는 결론을 내리게 되었다 그 뒤 그는 조선에도 민주주의와 자유주의, 기독교 정신의 도입이 시급하다고 봤다. 
그러나 미국의 민주주의와 자유주의, 기독교 사상과 개척정신은 높이 평가하면서도 내심 백인종을 혐오하는 이중적인 태도와 인종주의적인 사고를 갖게 되었다.\n\n어느 교회에서 남부 출신 남감리교회 목사들이 예배 시간에 흑인을 박멸해야 된다는 설교, 흑인들을 아프리카로 추방해야 된다는 설교를 듣고는 충격을 받기도 했다 그들이 목사인가 기독교인인가 자체를 의심하기도 했다. 하지만 흑인이 백인들로부터 차별대우를 받는 것에는 분노하면서도 흑인에 대해서는 1893년 2월 17일자 일기에 '(아프리카인들이 미국에 끌려와) 영어를 배운 것만으로도 그들의 노예생활에 대해 충분히 보상받은 것이다. 라는 이중적인 시각을 갖게 된다. 귀국 이후 그는 국내 인사들이 미국을 무조건적으로 의존하거나 일제와는 다른 선량한 국가일 것이라는 생각을 비판, 경계하기도 했다."]}, {"source_sentence": "다음 해 우리나라 국채의 규모는 얼마인가?", "sentences": ["국회 예산결산특별위원회는 오는 30일 ‘2015년도 예산안 및 기금운용계획안에 대한 공청회’를 열고 각계 전문가로부터 내년도 예산안에 대한 의견을 듣는다. 이어 11월6일 전체회의를 시작으로 본격적인 예산 심의에 들어간다. 7~8일에는 정홍원 국무총리와 최경환 부총리 겸 기획재정부 장관이 출석한 가운데 예산안에 대한 정부 측 제안 설명과 예결위 전문위원의 검토 보고를 듣고 종합 정책질의를 할 예정이다.예결특위는 10~13일 부별 심사를 거쳐 16일부터 예산안심사 소위원회를 가동하고 30일 전체회의에서 예산안을 의결해 본회의로 넘긴다는 계획이다. 올해부터는 개정 국회법(일명 국회선진화법)에 따라 11월 안에 내년도 예산안 심의를 마치지 못하면 12월1일 본회의에 정부 원안이 자동 부의(토의에 부침)된다. 여야 모두 헌법상 예산안 처리 시한인 12월2일을 준수하겠다는 방침이지만 현행법상 여야 원내대표 합의로 이를 연기할 수 있도록 돼 있어 올해도 예산안 처리 시한을 넘길 가능성을 배제할 수 없다.정부가 국회에 제출한 내년도 예산안 규모는 총 376조원으로 올해에 비해 5.7%(20조2000억원) 늘었다. 국민연금 등을 제외한 관리재정수지 적자가 올해 25조5000억원에서 내년에는 33조6000억원으로 확대될 전망이다. 내년 국가채무도 국내총생산(GDP)의 35.7%인 570조1000억원으로 급증해 사상 최고치를 기록하게 된다.정부·여당은 이에 대해 활력을 잃어가고 있는 한국 경제를 되살리기 위한 불가피한 조치라고 보고 있으나 야당 측은 무리한 재정 운용으로 미래 세대에 부담을 떠넘겨서는 안 된다고 지적한다. 국회 기획재정위원회 소속 한 새정치민주연합 의원은 “이렇게 무리한 적자재정을 편성한 것은 아무리 좋게 봐주려고 해도 내후년 총선을 겨냥해 경기를 부양하려는 의도라는 의심을 지울 수 없다”며 “정말 한국 경제의 구조적 문제를 해결하려고 한다면 일시적 처방이 아닌 잠재성장률 자체를 높이기 위한 정책을 내놨어야 한다”고 말했다.예산안은 물론 법안 심사 과정에서도 여야 간 한판 승부가 불가피하다. 대표적 쟁점 법안인 세월호 특별법, 정부조직법, ‘유병언법’ 등 ‘세월호 3법’과 여권이 사활을 걸고 추진 중인 경제활성화 관련 법, 정부가 연내 입법에 드라이브를 걸고 있는 공무원연금 개혁안 등을 놓고 여야가 첨예하게 대립하고 있기 때문이다. 특히 정부·여당이 최우선 입법 과제로 꼽고 있는 서비스산업발전기본법, 의료법 개정안, 관광진흥법, 자본시장법, 크루즈법, 마리나 항만법, 소규모 주택임대 수입에 대해 소득세를 낮추는 소득세법 개정안, 월세 임차인에 대한 세제지원 확대 등을 담은 조세특례제한법 개정안, 분양가 상한제를 탄력 적용하는 주택법 개정안, 재건축 초과이익 환수폐지 법안 등 30여개에 달하는 경제활성화 법이 국회에 묶여 있는 상태다.김현숙 새누리당 원내대변인은 “안타깝게도 새정치연합은 민생과 복지를 얘기하면서 그 핵심인 경제활성화는 외면하고 있다”며 “질타만 하지 말고 관련 법을 꼼꼼히 살펴보고 반드시 함께 처리해 대한민국이 저성장의 굴레에서 벗어나는 기회로 삼아야 한다”고 주장했다.", "당간지주는 사찰 입구에 설치해 두는 것으로, 절에 행사나 의식이 있을 때 이곳에 당이라는 깃발을 걸어둔다. 이 깃발을 매달아두는 길쭉한 장대를 당간이라 하며, 이를 양쪽에서 지탱해 주는 두 돌기둥을 당간지주라 한다. 드물게 당간이 있으나 대부분은 두 기둥만 남아 있다.\n\n이 당간지주는 만덕동에서 제1 만덕터널로 들어가는 진입로 아래의 만덕사터에 서 있다. 두 기둥 중에서 한쪽 기둥만 남아 있으며, 기둥 옆으로는 당간의 받침돌로 보이는 작은 돌이 놓여 있다. 바깥면은 가운데에 도드라진 선을 한 줄 새겨놓았고, 안쪽면 꼭대기에는 네모난 홈을 파서 당간을 고정시키도록 하였다. 맨 위는 안쪽에서 바깥쪽으로 2단의 굴곡을 주면서 둥글게 깎아두었다.\n\n각 부분의 돌을 다듬는 솜씨가 인근의 범어사 당간지주와 비교하여 매우 세련되어 있다. 비록 한쪽 기둥만 남아 있으나, 땅위에 드러난 높이가 길어 그 사이에 당간이 놓였다면 더욱 더 웅장했을 것으로 보인다. 만덕사가 창건된 시기와 때를 같이하여 고려 전기에 만든 것으로 보이며, 만덕사 옛터의 경계를 정하는데 있어 좋은 자료가 되고 있다.", "전년도에 2년 연속 일본 시리즈 정상에 오른 세이부 라이온스의 모리 마사아키 감독이 퍼시픽 리그 올스타팀을 지휘했고 센트럴 리그 우승을 이끈 요미우리 자이언츠의 오 사다하루 감독이 센트럴 리그 올스타팀의 지휘를 맡았다. 1987년부터 MVP 수상자에게 상금과 상품을 증정해 온 산요 전기가 이 해부터 특별 협찬 기업이 되면서 대회명도 ‘산요 올스타 게임’(サンヨーオールスターゲーム)이라는 명칭이 변경됐다(2006년까지).\n\n지붕을 씌우고 좌우 양쪽의 100m 넓이를 가지는 도쿄 돔이 개장하면서 ‘야구의 질이 달라졌다’라고 말할 정도였고, 시즌 종료와 함께 긴 역사를 가졌던 한큐 브레이브스와 난카이 호크스가 매각되면서 프로 야구도 새로운 시대를 맞이하려 하고 있었다.\n\n7월 23일로 치를 예정이었던 1차전이 우천으로 인해서 24일로 연기됐다. 1차전에서는 결과적으로 올스타전 최종 개최가 되는 한큐 니시노미야 구장에서 개최돼 니시노미야 구장의 홈팀인 한큐의 호시노 노부유키가 퍼시픽 올스타팀의 선발 투수로서 등판하여 책임 투구 이닝을 무실점으로 틀어 막았고 한큐의 주력 타자인 부머 웰스가 대형 홈런을 날리는 등 역사 깊은 구단의 화려한 해체와 같은 전개가 이뤄졌다.\n\n2차전에서는 나고야 구장에서 개최됐는데 오 사다하루를 동경하여 프로에 입문한 고졸 신인 다쓰나미 가즈요시가 선발 출전했고, 당시에는 지명타자가 올스타전에 도입돼 있지 않아서 가도타 히로미쓰가 3번 타자 겸 우익수로서 선발 출전했다. 6회에 오카다 아키노부(한신)의 역전 홈런으로 센트럴 올스타팀은 1승 1패를 기록했고 오카다는 2차전 MVP에 선정됐다. 
그리고 최종전인 3차전에서는 개장한 지 4개월 된 도쿄 돔에서 처음으로 올스타전이 개최됐는데 당시 도쿄 돔을 홈구장으로 사용하고 있던 닛폰햄 파이터스의 에이스 니시자키 유키히로가 퍼시픽 올스타팀의 선발 투수로서 3이닝을 퍼펙트(2회말에는 클린업에게서 3자 연속 삼진, 3회말 2사에 대타 다쓰나미로부터 5개째의 삼진) 기록을 세웠다. 1986년 이후 2년 만에 연장전으로 들어가면서 무승부 직전의 12회말, 야수를 전부 기용한 센트럴 올스타팀은 투수 미즈노 가쓰히토를 대타로 보냈는데 이것이 끝내기 희생 플라이가 되면서 센트럴 올스타팀이 쇼와 시대의 마지막 올스타전을 승리로 매듭지었다.\n\n3차전이 열린 도쿄 돔에서는 1988년도 야구 명예의 전당 시상식이 거행됐는데 센트럴 올스타팀의 감독이자 요미우리를 지휘하고 있는 오 사다하루의 후임으로 유력하게 거론돼던 나가시마 시게오가 그해 야구 명예의 전당의 헌액자로서 등장했다(그 후 히로오카 다쓰로를 임명하려 했으나 고사했고 후지타 모토시가 재부임). 사다하루·나가시마의 ‘ON 콤비’가 올스타전 무대에서 동시에 등장한 것은 1979년(나가시마는 감독, 사다하루는 현역 선수) 이후의 일이었고 이들 두 사람이 양대 리그의 감독으로서 올스타전에 나란히 등장한 것은 그로부터 13년 후인 2001년이다.\n\n그 해에는 워렌 크로마티의 대역으로 요미우리에서 대활약한 뤼밍츠가 감독 추천으로 올스타전에 첫 출전하여 전 경기에 선발로 출전했지만 무안타에 그쳤다."]}, {"source_sentence": "정치하는 엄마들이 피해자들이 정상적인 일상생활을 할 수 있도록 돕기 위해 진행했던 서비스는?", "sentences": ["스쿨미투는 학교에서 일어난 아동 성폭력을 아동청소년들 스스로 고발하며 공론화의 주체가 된 인권운동이다. 2018년 4월 용화여고 졸업생들이 재학 중에 겪었던 학교 성폭력을 공론화하자, 재학생들이 교실 창문에 포스트잇으로 #METOO #WITHYOU 모양으로 붙여 화답하며 한국의 고질적인 학교 성폭력 실태 고발이 들불처럼 이어졌다. 오랫동안 교사들이 저질러온 성폭행, 성추행, 성희롱, 성차별 발언들이 만연한 교육현장의 추악한 민낯은 한국 사회에 큰 충격을 주었다. 2018년 10월 국정감사에서 ‘스쿨미투’가 언급될 만큼 피해 학생들의 제보와 폭로가 이어져 사태의 심각성이 입증되었지만, 처벌은 미온적이고 가해교사들은 교단으로 부메랑처럼 복귀하는 등 교육 현장은 개선되지 않았다. 더 이상의 피해를 막기 위해 용기 냈던 학생들은 주동자 색출 위협과 소외와 조롱을 견디며 보호받지 못했다. 학교 및 교육 당국의 더디고 미흡한 조치로 공론화 당사자들은 일상으로 돌아오지 못하고 힘겹게 학업을 이어가거나 학교를 떠나기도 했다. 한국사회는 마땅히 안전해야 할 교육 현장에서 아동학대• 아동인권 침해 사안이 일어나도 문제 해결의 결정적 주체인 학교가 은폐• 축소하면 전혀 알 수 없는 끔찍한 현실을 마주하게 되었다.\n\n\n이에 정치하는엄마들은 피해 당사자들이 하루빨리 일상으로 복귀하길 바라는 마음으로 무료 법률지원을 시작했다. 2018년 2월부터 스쿨미투를 공론화한 전국 79개교 중 연락 가능한 49개교 트위터 계정들에 안내를 시작으로 피해 당사자들이 학교, 교육청, 경찰 등과 겪고 있는 갈등 조정을 이어가고 있다. 더불어 대한민국의 교육현장에서 오랫동안 자행되어 왔던 아동 인권 침해의 벽을 뚫고 나온, 아동 청소년들의 절박하고 용기있는 목소리들을 바탕으로 스쿨미투 전국지도를 제작하여 온라인 상에 공개하고 있다.\n\n\n피해자 회복과 가해 재발 방지를 위해 투명한 행정처리와 정보공개는 필수이다. 이에 2019년 3월 정치하는엄마들은 제주를 제외한 전국 16개 시도교육청에 스쿨미투 현황 파악을 위한 정보공개청구를 하였다. 그러나 대부분의 시도교육청으로부터 비공개 답변을 받았다. 정보공개 청구 당시 23개교로 학생들이 가장 많이 고발한 지역 관할인 서울시교육청의 불투명한 정보 비공개, 부존재로 답변하는 등 소극행정에 대하여 정치하는엄마들은 서울시교육청 상대로 2019년 5월 스승의 날을 맞아 정보공개거부 철회 행정소송 제기했다. 2020년 3월 5일 서울행정법원은 교사 실명 등 개인정보를 가리고 스쿨미투 처리결과를 공개하라는 일부승소 판결을 내렸다. 그러나 서울시교육청은 정보가 공개될 경우 배제징계를 받은 교사가 교단에 설 수 있을지 걱정된다며, 교사의 사생활 보호를 이유로 2020년 3월 20일 항소하여 올바른 교육문화 정책의 방향과 아동인권을 후퇴시킨다는 비판을 받고 있다. \n\n\n스쿨미투는2019년 3월 한국을 찾은 트위터 CEO 잭 도시가 불의에 맞서는 한국 청소년들이 공론의 장으로 트위터를 선택해준 걸 자랑스럽게 여기며 높이 평가할 정도로 2018년 한국 인권운동의 한 획을 그은 역사적인 일이다. 동시에 2019년 유엔 아동인권위원회 본회의 안건으로 채택될 만큼 국제사회가 한국에게 그 후속대책과 국가적 책임을 묻고 있는 중대한 사안이다. 학교 성폭력 해결은 현재를 살아가고 있는 아동뿐만 아니라 오랫동안 위계에 갇혀 부당한 권력에 침묵했던 시민들 스스로가 인권침해 상황을 견뎌왔던 존재였음을 인정하는 것부터 그 첫걸음이 될 것이다.\n\n정치하는엄마들은 아이들과 함께 살아가는 사람들로서, 교육 현장에 만연해 있는 성폭력 해결을 통해 그 누구도 차별 받지 않고 존재 그대로 존중받는 세상을 만들기 위해 노력하고 있다.", "그녀는 경기도 용인의 한 이교도 집안에서 태어났다. 그녀는 경기도 봉천리에 있는 한 이교도 이씨 집안으로 시집왔고, 중년에 시누이 이매임 테레사의 권유로 천주교에 입교했다. 그녀는 그녀의 남편이 개종했으면 하는 바람은 이루지 못했지만, 자식들이 개종하는데에는 일조하였다. 후일에 그녀의 딸들 중 이영희 막달레나는 1839년 7월 20일에 순교했고, 같은 해 9월 3일에는 이정희 바르바라도 순교하였다.\n\n1839년 3월에 그녀는 성사를 받으러 한양으로 올라왔다. 그녀는 시누이 이매임과 두 딸 이영희와 이정희가 사는 집에 머물던 중 교우 남명혁 다미아노와 이광헌 아우구스티노의 용감한 순교 소식을 계기로, 두 딸과 시누이, 동료 김성임 마르타 그리고 김 루치아 등과 함께 자수를 결심하고 1839년 4월 11일에 남명혁 다미아노의 집을 경계하던 포졸들에게 묵주를 보이며 천주교 신자임을 알리고 자수하였다.\n\n허계임의 옥중 생활에 대해서는 많은 기록이 남아 있지 않은데, 그녀는 격심한 고문을 받았지만, 용기와 인내로 온갖 고통을 견뎌냈다는 것이 그것에 대해 알려진 대부분이다. 그녀의 용기는 그녀의 두 딸 이영희와 이정희의 용기만큼 위대했다. 그 두 순교자의 어머니 또한 순교자가 된 것이다. 그 영광스러운 순교자 가족은 하느님의 은총이다.\n\n허계임은 1839년 9월 26일에 서소문 바깥에서 여덟 명의 교우와 함께 참수되었다. 그렇게 그녀는 67세의 나이로 순교하였다.", "대림성모병원(병원장 김성원)이 ‘코로나19로 인한 5대 암(위암‧대장암‧자궁경부암‧간암‧유방암) 진단 지연 실태’에 대한 분석 결과를 발표했다. 암은 발견 시기에 따라 생존율이 크게 달라지기 때문에 조기 발견이 매우 중요하다. 
대림성모병원은 코로나19로 인해 낮아진 5대 암 진단 실태를 알리고 빠른 검진의 중요성을 강조하고자 이번 조사를 실시했다. ■ 올해 3~5월 5대 암 진단 건수, 전년 동기 대비 5000여건 감소 대림성모병원이 국민건강보험공단에 등록된 위암, 대장암, 자궁경부암, 간암, 유방암 산정특례 건수를 분석한 결과, 2020년 3~5월 건수가 전년 동기간 대비 5,146건 감소한 것으로 나타났으며, 이는 2019년에 비해 21.4% 감소한 수치이다. 해당 건수에는 산정특례 종료 시점에서 재발 또는 전이로 기간을 연장하는 경우는 제외했기 때문에 해당 암을 처음 진단받은 환자 수가 전년 대비 크게 낮아진 것을 알 수 있다. 위암 등록건수는 2019년 6,823건에서 2020년 4,735건으로 2,088건 감소해 30.6%가 낮아졌다. 대장암은 6,765건에서 5,325건으로 1,440건(21.3% 감소), 자궁경부암은 846건에서 632건으로 214건(25.3% 감소), 간암은 3,800건에서 3,276건으로 524건(13.8% 감소), 유방암은 5,789건에서 4,909건으로 880건(15.2% 감소)이 줄어든 것으로 나타났다. ■ 코로나19로 5대 암 수검율 큰 폭 하락 이처럼 코로나19가 발병한 이후 3개월간 2020년 3~5월 5대 암 환자 수는 전년 대비해 5,000여 명이 감소한 것으로 나타났다. 이는 수많은 암 환자가 진단되지 않고 있다는 것을 의미한다. 암 발견 시기는 생존율에 큰 영향을 미치기 때문에 조기검진을 통한 빠른 치료는 무엇보다 중요하다. 대림성모병원은 진단 건수뿐만 아니라 실제 검진율이 얼마나 감소했는지 알아보기 위해 본원의 5대 암 검진 건수를 분석했다. 그 결과 올해 2~5월에 5대 암 검진 건수가 작년 동기간 대비해 52.8%가 감소한 것으로 나타났다. 대림성모병원 김성원 병원장은 “코로나19로 인한 검진율 감소는 본원뿐만 아니라 전국의 모든 병원이 체감하고 있을 것이다.”라며 “코로나19 2차 대유행 및 장기화를 대비하기 위해 사회적 거리두기를 통한 생활 방역이 매우 중요한 시기이다. 하지만 암 검진을 늦추는 것은 진단 지연으로 암이 전이되어 심각한 결과를 초래할 수 있기 때문에 검진 대상자는 제때 검진을 받고 암 증상이 있다면 조금이라도 빨리 병원을 방문하기를 바란다.”라고 말했다."]}, {"source_sentence": "전자태그 갱신제와 함께 도입되는 제도는?", "sentences": ["왓챠는 ‘제1회 왓챠 시리즈 각본 공모전’ 결과 대상을 수상한 전설 작가의 <공단>을 포함해 총 5편의 수상작을 선정했다고 29일 발표했다. <공단>은 개성공단에서 일하던 북한 여인과 남한 여인이 서로와의 교감을 통해 성장하고 연대하며 희망과 사랑을 찾게 되는 이야기다. 대상에 선정된 전설작가에게는 상금 2천만원이 수여된다. 우수상에는 전효정 작가의 <웬수의 네 번째 웨딩>과 김도연 작가의 <다시 좀비가 될 순 없어>, 두 편이 선정됐다. 우수상 수상 작가에게는 1천만원의 상금이 수여된다. <웬수의 네 번째 웨딩>은 웨딩플래너인 주인공이 짝사랑하는 남자의 새아빠가 될 자신의 철없는 친아빠의 네 번째 결혼식 웨딩플래닝을 맡게 되면서 미처 몰랐던 진정한 사랑을 찾아가는 성장 스토리다. <다시 좀비가 될 순 없어>는 좀비에게 물리는 것보다 좀비 확진자였다는 사회적 낙인이 더 무서운 세상에서 좀비 확진자였던 주인공이 회사라는 공간에서 생존해가는 이야기다. 장려상에는 조유진 작가의 <완벽한 하루>와 안세화 작가의 <손을 잡아줘>가 선정됐다. 두 작가에게는 상금 5백만원이 수여된다. 총 상금 5천만원 규모로 올해 처음으로 진행된 ‘왓챠 시리즈 각본 공모전’에는 600편 넘는 작품이 응모됐다.", "화장품을 팔기 위해선 정신감정을 받아야 한다. 메이크업 전문업체를 내려면 미용사 자격증이 있어야 한다. 일회용 이쑤시개도 개별 포장지에 제조연월을 모두 표시해야 한다…. 덩어리 규제만 경제의 발목을 잡는 것이 아니다. 은퇴 후 창업 등 자영업 전선에 나선 사람들은 전혀 예기치 않은 규제와 맞닥뜨려 곤란을 겪는 경우가 많다. 25일 한국경제신문이 자영업 규제 실태를 파악한 결과 소상공인의 창업을 가로막는 안전·위생 규제는 수십 가지에 달했다. 대표적인 게 화장품 제조·유통업체를 운영하려면 정신감정을 받아야 한다는 화장품법 조항이다. 식품의약품안전처는 소비자 안전을 위해 필요한 조항이라고 설명하지만 이것이 과연 필요한 규제인지는 논란이 분분하다. 업계의 한 관계자는 “제품 안전을 위해 하필이면 정신감정을 의무화해야 할 이유가 뭔지 모르겠다”며 “정말 황당해 하는 사람이 많다”고 말했다. A5면에 계속 고은이 기자 [email protected] 한경 기업신문고 무역협회·전경련·중기중앙회와 함께 운영합니다", "서울시가 승용차요일제 가입 차량에 제공하던 자동차세 5% 감면 혜택을 내년부터 폐지한다. 2003년 승용차요일제가 도입된 지 11년 만에 가입 혜택을 축소하는 것이다. ▶본지 2월10일자 A31면 참조서울시는 “승용차요일제의 실효성을 높이기 위해 전문가 간담회와 시민공청회 등을 거쳐 이 같은 방안을 확정했다”고 27일 발표했다. 승용차요일제는 월~금요일 5일 중 시민 스스로 쉬는 날을 정하고 승용차요일제 전자태그를 차량에 부착한 뒤 해당 요일에 차량을 운행하지 않으면 혜택을 주는 제도다. 지난달 말 기준으로 서울 등록차량 237만대 중 33%인 79만대가 가입돼 있다.서울시는 그동안 승용차요일제 가입 차량에 대해 남산 혼잡통행료 50%, 공영주차장 요금 30% 할인과 자동차세 5% 감면 등 다양한 인센티브를 제공해왔다. 그러나 일부 가입자가 혜택만 받고 전자태그를 떼거나 운휴일을 위반하는 사례가 잇따라 제도 정비의 필요성이 제기됐다. 서울시는 시내 19곳에 전자태그 리더기를 설치해 요일제 위반 차량을 적발했지만 이들 지점을 제외하면 사실상 단속이 어렵다. 서울시는 한 해 3회 이상 운휴일을 위반한 차량 5만여대를 적발했지만 실제 위반 차량은 수십만대에 달할 것으로 보고 있다.이와 함께 전자태그 갱신제도 오는 9월부터 시행한다. 전자태그 발급일로부터 만 5년이 지난 뒤 90일 이내에 태그를 재발급받지 않으면 자동 탈퇴 처리한다. 다만 도입 초기 혼란 방지를 위해 2006~2010년에 발급된 전자태그(약 60만장)에 한해 2016년 3월30일까지 재발급받는 것을 허용한다.대신 서울시는 자기 승용차를 평상시보다 덜 몰아 주행거리를 줄이면 인센티브를 지급하는 ‘드라이빙 마일리지제’를 도입, 9월부터 시범운영에 들어갈 방침이다. 서울시는 보험사와 협의해 조만간 구체적인 인센티브 방안을 마련할 계획이다."]}, {"source_sentence": "JD월드와이드보다 먼저 해외 직구 사이트를 만든 회사는?", "sentences": ["중국의 1, 2위 전자상거래업체 알리바바와 JD닷컴이 급성장하는 해외 직접구매(직구) 시장을 놓고 맞붙었다.16일 사우스차이나모닝포스트(SCMP)에 따르면 JD닷컴은 지난 15일 해외 직구 전용 사이트 JD월드와이드(www.jd.hk)를 개설해 본격적인 영업에 들어갔다. 중국 소비자는 이 사이트에서 중국에 직접 진출하지 않은 기업의 제품도 온라인으로 구매할 수 있다. 
현재 버버리 랄프로렌 리바이스 보스 에코 등 패션 브랜드와 시세이도 랑콤 등 화장품 브랜드가 제품을 판매하고 있다. JD닷컴 관계자는 “미국 호주 프랑스 일본 한국 등의 다수 기업이 참여 의사를 밝혔다”며 “향후 세계 1200개 브랜드 제품을 판매할 것”이라고 설명했다. 앞서 알리바바의 B2C 전자상거래 사이트인 티몰은 작년 2월 해외직구 전문 사이트 티몰글로벌(www.tmall.hk)을 열었다. 이곳에는 한국의 이마트 롯데마트 LG생활건강 등도 입점해 있다.중국 전자상거래연구센터에 따르면 중국의 해외 직구 시장은 2013년 13조원이던 것이 작년에는 27조원으로 커졌으며, 2018년에는 418조원 규모로 급팽창할 것으로 예상되고 있다.", "달라스튜디오가 신규 웹예능 ‘배달그라운드’로 뜨거운 인기를 이어간다. 글로벌 미디어 기업 에이앤이 코리아(A+E Korea)의 유튜브 채널 ‘달라스튜디오’가 2020년 돌풍을 일으킨 ‘네고왕’과 ‘발명왕’의 인기를 이어갈 신작 웹예능 ‘배달그라운드’를 오늘(12일) 저녁 6시 30분에 첫 공개한다. ‘배달그라운드’는 찾아가는 본격 전국 배달맛집 리뷰 웹예능으로, 언택트 시대에 걸맞는 먹방 리뷰 콘텐츠로 다시 한번 에이앤이 디지털 스튜디오만의 ‘남다른’ 행보를 이어갈 예정. ‘배달그라운드’는 전국 방방곡곡 숨은 배달맛집들을 발굴하고, 직접 배달 가능 지역으로 찾아가 ‘어디서든’ 100% 솔직한 먹방 리뷰를 전한다. 특히 길바닥부터 모텔 바닥까지 배달 가능 지역에 어디서든 안전하게 먹을 수 있는 곳을 찾아 무작정 펼치는 이색 먹방을 펼칠 예정. 언택트 시대로 각광받고 있는 ‘배달 맛집 투어’라는 신선한 컨셉과 SNL에서 싱크로율 100%를 자랑하는 이광수 얼굴묘사로 ‘보급형 이광수’ 라는 타이틀을 얻은 배우 겸 개그맨 이명훈이라는 신선한 캐스팅으로 더욱 기대를 북돋고 있다. 12일(목) 공개될 ‘배달그라운드’ 1화에서는 정제되지 않은 예능원석 이명훈이 거침없는 배달 맛집 투어의 첫 시작을 알린다. “얼굴은 이광수, 이름은 임영웅을 닮았다”는 닮은 꼴 부자 이명훈의 자유분방한 동네투어부터 제작진과의 남다른 티격태격 케미 폭발로 첫 화부터 새로운 웃음 폭탄의 등장을 알릴 예정. “‘동네왕’으로 이름 변경하면 안 돼요?”라며 왕이 되고 싶다는 야심을 드러낸 이명훈과 콘텐츠 재미를 위한 제작진들의 고군분투 케미가 ‘배달그라운드’ 유행을 예고한다. 에이앤이 코리아 디지털 스튜디오에서 선보인 유튜브 채널 ‘달라스튜디오’는 ‘네고왕’, ‘발명왕’ 등 메가 히트 웹예능을 잇달아 선보이고 있다. 런칭 3개월만에 구독자 100만 명 돌파하는 등 독보적인 화제성과 파급력을 입증하고 있는 달라스튜디오는 '배달그라운드'를 공개해 흥행 돌풍을 이어갈 계획이다. 찾아가는 배달 맛집 전국 투어가 시작되는 '배달그라운드' 1화는 12일(목) 저녁 6시 30분 달라스튜디오에서 공개된다. ‘배달그라운드’는 매주 목요일 저녁 6시 30분 유튜브 채널 달라스튜디오에서 공개될 예정이다.", "다우(NYSE: DOW)가 지속가능한 패키징의 상용화에 앞장서고자 세계적인 플라스틱 필름 포장재 제조사인 통 구안(Thong Guan)과 협력해 지속 가능 자원 기반의 폴리에틸렌 수지로 만든 식품용 랩 ‘나노 바이오(Nano Bio)’를 아시아태평양 지역에 출시한다고28일 밝혔다. 통 구안이 출시한 ‘나노 바이오’는 다우의 저밀도 폴리에틸렌(LDPE) 수지인 ELITE™ 5230GC R Enhanced Polyethylene Resin를 기반으로 제작된다. 이 수지는 종이 생산 과정에서 발생하는 톨유(Tall oil)를 활용한 친환경 포장재로, 석유화학 원료를 기반으로 한 수지의 필름 생산 과정과 비교했을 때 탁월한 성능을 유지하면서도 현저히 낮은 양의 이산화탄소를 배출하는 것이 특징이다. 아울러 다우의 친환경 수지는 다른 지속 가능 자원 기반 제품과 달리 생태계 먹이사슬에 영향을 끼치지 않는 것이 특징이다. 지속가능성을 고려해 종이 생산시의 부산물을 사용하여 추가적인 토지 개발도 필요로 하지 않는다. 또한 통 구안의 랩 필름은 첨단 나노기술을 적용한 다운게이징(downgauging) 필름으로 기존 랩과 동일한 강도, 유연성, 내구성을 갖추고 있어 우수한 성능과 지속가능성을 동시에 추구할 수 있게 해준다. 또한 다우는 매스 밸런스 접근법(mass balance approach)을 적용해 기존의 석유 기반 원료를 이용한 생산시설에서 바이오 기반 원료를 사용할 수 있게 함으로써 별도 생산시설 구축에 따른 환경 비용을 줄이고, 생산자들로 하여금 더 많은 바이오 기반 원료를 이용할 수 있도록 돕는다. 다우 아시아태평양 지역 패키징 및 특수 플라스틱 사업부 총괄 부사장인 밤방 캔드라(Bambang Candra)는 “최근 새롭게 발표한 지속가능성 목표를 달성하기 위해 다우는 더 적은 자원으로 제품을 생산할 수 있는 기술을 개발, 고객들로 하여금 탄소 배출량을 감소할 수 있도록 지원하고 있다” 며 “다우 아태지역 최초의 바이오 기반 폴리에틸렌 출시를 시작으로 지속가능한 포장재 상용화와 플라스틱의 순환경제를 이뤄 나갈 수 있기를 기대한다”고 말했다. 통 구안의 다토 앙 푼 추안(Dato' Ang Poon Chuan) 매니징 디렉터는 “다우와의 협력은 지속가능한 환경과 자원 보존을 위한 다양한 노력의 일환”이라며 “다우의 친환경 수지를 기반으로 제작된 통 구안의 ‘나노 바이오’는 플라스틱 사용량은 줄이면서 우수한 성능은 그대로 유지해 고객사들의 만족도를 충족시켜줄 수 있는 뛰어난 제품”이라고 밝혔다. 한편, 종이 생산 과정에서 발생하는 톨유로부터 추출한 바이오 나프타(bio-naphtha)를 이용해 생산되는 다우의 저밀도 폴리에틸렌 수지는 국제 친환경 바이오 연료 인증(ISCC, International Sustainability & Carbon Certificate)을 획득한 바 있다. 이러한 생산 방식은 복잡한 제조 혹은 생산 시스템 내에서 지속가능한 원료의 공급을 원활히 함으로써 업계의 지속가능성을 향상시키고, 모든 단계의 추적가능성 기준(traceability criteria)을 충족시킨다."]}], "model-index": [{"name": "SentenceTransformer based on shangrilar/klue-roberta-base-klue-sts", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "Unknown", "type": "unknown"}, "metrics": [{"type": "pearson_cosine", "value": 0.8053055366603666, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.8150473226218842, "name": "Spearman Cosine"}]}]}]}
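The model-index above reports Pearson and Spearman cosine scores for semantic textual similarity. As an illustration only, the sketch below shows how such query–passage cosine scores are typically computed with sentence-transformers; the checkpoint id is taken from the base-model reference in the metadata (`shangrilar/klue-roberta-base-klue-sts`) and stands in for the fine-tuned repository, and the query/passage strings are taken from the widget examples above.

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative sketch: the model id below is the base checkpoint named in the
# metadata, not necessarily the fine-tuned repository itself.
model = SentenceTransformer("shangrilar/klue-roberta-base-klue-sts")

query = "JD월드와이드보다 먼저 해외 직구 사이트를 만든 회사는?"
passages = [
    "중국의 1, 2위 전자상거래업체 알리바바와 JD닷컴이 급성장하는 해외 직접구매(직구) 시장을 놓고 맞붙었다.",
    "달라스튜디오가 신규 웹예능 '배달그라운드'로 뜨거운 인기를 이어간다.",
]

# Encode the query and passages, then rank passages by cosine similarity.
scores = util.cos_sim(model.encode(query), model.encode(passages))
print(scores)  # higher score = more relevant passage
```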
task
[ "TEXT_CLASSIFICATION", "SEMANTIC_SIMILARITY" ]
46,023
joeranbosma/dragon-longformer-large-mixed-domain
joeranbosma
fill-mask
[ "transformers", "pytorch", "safetensors", "longformer", "fill-mask", "doi:10.57967/hf/2174", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-05-03T09:41:26Z
2025-02-07T09:30:55+00:00
6
0
--- license: cc-by-nc-sa-4.0 --- # DRAGON Longformer large mixed-domain Pretrained model on Dutch clinical reports using a masked language modeling (MLM) objective. It was introduced in [this](#pending) paper.&nbsp;The model was first pretrained using general domain data, as specified [here](https://huggingface.co/allenai/longformer-large-4096). The pretrained model was taken from HuggingFace: [`allenai/longformer-large-4096`](https://huggingface.co/allenai/longformer-large-4096). Subsequently, the model was pretrained using domain-specific data (i.e., clinical reports). The tokenizer of [`allenai/longformer-large-4096`](https://huggingface.co/allenai/longformer-large-4096) was used. ## Model description Longformer is a transformers model that was pretrained on a large corpus of Dutch clinical reports in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way with an automatic process to generate inputs and labels from those texts. This way, the model learns an inner representation of the Dutch medical language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled reports, for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ## Model variations Multiple architectures were pretrained for the DRAGON challenge. | Model | #params | Language | |------------------------|--------------------------------|-------| | [`joeranbosma/dragon-bert-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-bert-base-mixed-domain) | 109M | Dutch → Dutch | | [`joeranbosma/dragon-roberta-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-base-mixed-domain) | 278M | Multiple → Dutch | | [`joeranbosma/dragon-roberta-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-large-mixed-domain) | 560M | Multiple → Dutch | | [`joeranbosma/dragon-longformer-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-base-mixed-domain) | 149M | English → Dutch | | [`joeranbosma/dragon-longformer-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-large-mixed-domain) | 435M | English → Dutch | | [`joeranbosma/dragon-bert-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-bert-base-domain-specific) | 109M | Dutch | | [`joeranbosma/dragon-roberta-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-base-domain-specific) | 278M | Dutch | | [`joeranbosma/dragon-roberta-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-large-domain-specific) | 560M | Dutch | | [`joeranbosma/dragon-longformer-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-base-domain-specific) | 149M | Dutch | | [`joeranbosma/dragon-longformer-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-large-domain-specific) | 435M | Dutch | ## Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole text (e.g., a clinical report) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. 
## How to use You can use this model directly with a pipeline for masked language modeling: ```python from transformers import pipeline unmasker = pipeline("fill-mask", model="joeranbosma/dragon-longformer-large-mixed-domain") unmasker("Dit onderzoek geen aanwijzingen voor significant carcinoom. PIRADS <mask>.") ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("joeranbosma/dragon-longformer-large-mixed-domain") model = AutoModel.from_pretrained("joeranbosma/dragon-longformer-large-mixed-domain") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors="pt") output = model(**encoded_input) ``` ## Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. ## Training data For pretraining, 4,333,201 clinical reports (466,351 consecutive patients) were selected from Ziekenhuisgroep Twente from patients with a diagnostic or interventional visit between 13 July 2000 and 25 April 2023. 180,439 duplicate clinical reports (179,808 patients) were excluded, resulting in 4,152,762 included reports (463,692 patients). These reports were split into training (80%, 3,322,209 reports), validation (10%, 415,276 reports), and testing (10%, 415,277 reports). The testing reports were set aside for future analysis and are not used for pretraining. ## Training procedure ### Pretraining The model was pretrained using masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the sentence. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. The HuggingFace implementation was used for pretraining: [`run_mlm.py`](https://github.com/huggingface/transformers/blob/7c6ec195adbfcd22cb6baeee64dd3c24a4b80c74/examples/pytorch/language-modeling/run_mlm.py). ### Pretraining hyperparameters The following hyperparameters were used during pretraining: - `learning_rate`: 5e-05 - `train_batch_size`: 4 - `eval_batch_size`: 4 - `seed`: 42 - `gradient_accumulation_steps`: 4 - `total_train_batch_size`: 16 - `optimizer`: Adam with betas=(0.9,0.999) and epsilon=1e-08 - `lr_scheduler_type`: linear - `num_epochs`: 3.0 - `max_seq_length`: 4096 ### Framework versions - Transformers 4.29.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3 ## Evaluation results Pending evaluation on the DRAGON benchmark. ### BibTeX entry and citation info ```bibtex @article{PENDING} ```
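As a supplementary illustration of the masking scheme described above (15% of tokens selected; 80% replaced by the mask token, 10% by a random token, 10% left unchanged), the sketch below uses the standard Hugging Face data collator that `run_mlm.py` builds on. It is a minimal example rather than the exact pretraining pipeline, and the input sentence is a placeholder.

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("joeranbosma/dragon-longformer-large-mixed-domain")

# DataCollatorForLanguageModeling implements the 80/10/10 masking rule internally;
# mlm_probability controls the 15% selection rate described above.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

encoding = tokenizer("Dit is een placeholder-zin, geen echt klinisch rapport.", return_tensors="pt")
batch = collator([{k: v[0] for k, v in encoding.items()}])

print(batch["input_ids"])  # some positions replaced by <mask> or a random token
print(batch["labels"])     # -100 everywhere except the positions the model must predict
```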
null
BioNLP
# DRAGON Longformer large mixed-domain Pretrained model on Dutch clinical reports using a masked language modeling (MLM) objective. It was introduced in [this](#pending) paper.&nbsp;The model was first pretrained using general domain data, as specified [here](https://huggingface.co/allenai/longformer-large-4096). The pretrained model was taken from HuggingFace: [`allenai/longformer-large-4096`](https://huggingface.co/allenai/longformer-large-4096). Subsequently, the model was pretrained using domain-specific data (i.e., clinical reports). The tokenizer of [`allenai/longformer-large-4096`](https://huggingface.co/allenai/longformer-large-4096) was used. ## Model description Longformer is a transformers model that was pretrained on a large corpus of Dutch clinical reports in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way with an automatic process to generate inputs and labels from those texts. This way, the model learns an inner representation of the Dutch medical language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled reports, for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ## Model variations Multiple architectures were pretrained for the DRAGON challenge. | Model | #params | Language | |------------------------|--------------------------------|-------| | [`joeranbosma/dragon-bert-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-bert-base-mixed-domain) | 109M | Dutch → Dutch | | [`joeranbosma/dragon-roberta-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-base-mixed-domain) | 278M | Multiple → Dutch | | [`joeranbosma/dragon-roberta-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-large-mixed-domain) | 560M | Multiple → Dutch | | [`joeranbosma/dragon-longformer-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-base-mixed-domain) | 149M | English → Dutch | | [`joeranbosma/dragon-longformer-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-large-mixed-domain) | 435M | English → Dutch | | [`joeranbosma/dragon-bert-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-bert-base-domain-specific) | 109M | Dutch | | [`joeranbosma/dragon-roberta-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-base-domain-specific) | 278M | Dutch | | [`joeranbosma/dragon-roberta-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-large-domain-specific) | 560M | Dutch | | [`joeranbosma/dragon-longformer-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-base-domain-specific) | 149M | Dutch | | [`joeranbosma/dragon-longformer-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-large-domain-specific) | 435M | Dutch | ## Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole text (e.g., a clinical report) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. 
## How to use You can use this model directly with a pipeline for masked language modeling: ```python from transformers import pipeline unmasker = pipeline("fill-mask", model="joeranbosma/dragon-longformer-large-mixed-domain") unmasker("Dit onderzoek geen aanwijzingen voor significant carcinoom. PIRADS <mask>.") ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("joeranbosma/dragon-longformer-large-mixed-domain") model = AutoModel.from_pretrained("joeranbosma/dragon-longformer-large-mixed-domain") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors="pt") output = model(**encoded_input) ``` ## Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. ## Training data For pretraining, 4,333,201 clinical reports (466,351 consecutive patients) were selected from Ziekenhuisgroep Twente from patients with a diagnostic or interventional visit between 13 July 2000 and 25 April 2023. 180,439 duplicate clinical reports (179,808 patients) were excluded, resulting in 4,152,762 included reports (463,692 patients). These reports were split into training (80%, 3,322,209 reports), validation (10%, 415,276 reports), and testing (10%, 415,277 reports). The testing reports were set aside for future analysis and are not used for pretraining. ## Training procedure ### Pretraining The model was pretrained using masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the sentence. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. The HuggingFace implementation was used for pretraining: [`run_mlm.py`](https://github.com/huggingface/transformers/blob/7c6ec195adbfcd22cb6baeee64dd3c24a4b80c74/examples/pytorch/language-modeling/run_mlm.py). ### Pretraining hyperparameters The following hyperparameters were used during pretraining: - `learning_rate`: 5e-05 - `train_batch_size`: 4 - `eval_batch_size`: 4 - `seed`: 42 - `gradient_accumulation_steps`: 4 - `total_train_batch_size`: 16 - `optimizer`: Adam with betas=(0.9,0.999) and epsilon=1e-08 - `lr_scheduler_type`: linear - `num_epochs`: 3.0 - `max_seq_length`: 4096 ### Framework versions - Transformers 4.29.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3 ## Evaluation results Pending evaluation on the DRAGON benchmark. ### BibTeX entry and citation info ```bibtex @article{PENDING} ```
{"license": "cc-by-nc-sa-4.0"}
task
[ "QUESTION_ANSWERING" ]
46,024
predibase/viggo
predibase
text-generation
[ "peft", "safetensors", "text-generation", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
2024-02-19T19:13:58Z
2024-02-21T19:13:55+00:00
76
1
--- base_model: mistralai/Mistral-7B-v0.1 library_name: peft pipeline_tag: text-generation --- Description: Translation of video game meaning representations to natural language\ Original dataset: https://huggingface.co/datasets/GEM/viggo \ ---\ Try querying this adapter for free in Lora Land at https://predibase.com/lora-land! \ The adapter_category is Structured-to-Text and the name is Structured-to-Text (viggo)\ ---\ Sample input: Here are two examples of meaning representations being translated into plain English:\n\nExample representation: "request(release_year[2014], specifier[terrible])"\nExample output: "Were there even any terrible games in 2014?"\n\nExample representation: "give_opinion(name[Little Nightmares], rating[good], genres[adventure, platformer, puzzle], player_perspective[side view])"\nExample output: "Adventure games that combine platforming and puzzles can be frustrating to play, but the side view perspective is perfect for them. That's why I enjoyed playing Little Nightmares."\n\nUsing the previous examples as guidelines, please translate the following representation into plain English:\nRepresentation: suggest(name[Little Big Adventure], player_perspective[third person], platforms[PC])\nOutput:\ ---\ Sample output: Do you like third person PC games like Little Big Adventure?\ ---\ Try using this adapter yourself! ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mistral-7B-v0.1" peft_model_id = "predibase/viggo" model = AutoModelForCausalLM.from_pretrained(model_id) model.load_adapter(peft_model_id) ```
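The loading snippet above stops before inference. The sketch below is one possible way to generate a translation with the loaded adapter; the prompt is adapted from the sample input shown earlier, while the generation settings (`max_new_tokens`, greedy decoding) are illustrative choices rather than values prescribed for this adapter.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
peft_model_id = "predibase/viggo"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)

# Prompt mirrors the sample input above: an in-context example, then the new representation.
prompt = (
    "Here is an example of a meaning representation being translated into plain English:\n\n"
    'Example representation: "request(release_year[2014], specifier[terrible])"\n'
    'Example output: "Were there even any terrible games in 2014?"\n\n'
    "Using the previous example as a guideline, please translate the following representation "
    "into plain English:\nRepresentation: suggest(name[Little Big Adventure], "
    "player_perspective[third person], platforms[PC])\nOutput:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)  # illustrative setting
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```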
null
Non_BioNLP
Description: Translation of video game meaning representations to natural language\ Original dataset: https://huggingface.co/datasets/GEM/viggo \ ---\ Try querying this adapter for free in Lora Land at https://predibase.com/lora-land! \ The adapter_category is Structured-to-Text and the name is Structured-to-Text (viggo)\ ---\ Sample input: Here are two examples of meaning representations being translated into plain English:\n\nExample representation: "request(release_year[2014], specifier[terrible])"\nExample output: "Were there even any terrible games in 2014?"\n\nExample representation: "give_opinion(name[Little Nightmares], rating[good], genres[adventure, platformer, puzzle], player_perspective[side view])"\nExample output: "Adventure games that combine platforming and puzzles can be frustrating to play, but the side view perspective is perfect for them. That's why I enjoyed playing Little Nightmares."\n\nUsing the previous examples as guidelines, please translate the following representation into plain English:\nRepresentation: suggest(name[Little Big Adventure], player_perspective[third person], platforms[PC])\nOutput:\ ---\ Sample output: Do you like third person PC games like Little Big Adventure?\ ---\ Try using this adapter yourself! ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mistral-7B-v0.1" peft_model_id = "predibase/viggo" model = AutoModelForCausalLM.from_pretrained(model_id) model.load_adapter(peft_model_id) ```
{"base_model": "mistralai/Mistral-7B-v0.1", "library_name": "peft", "pipeline_tag": "text-generation"}
task
[ "TRANSLATION" ]
46,025
LoneStriker/law-LLM-13B-8.0bpw-h8-exl2
LoneStriker
text-generation
[ "transformers", "pytorch", "llama", "text-generation", "legal", "en", "dataset:Open-Orca/OpenOrca", "dataset:GAIR/lima", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "dataset:EleutherAI/pile", "arxiv:2309.09530", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-01-01T22:19:29Z
2024-01-01T22:24:52+00:00
6
0
--- datasets: - Open-Orca/OpenOrca - GAIR/lima - WizardLM/WizardLM_evol_instruct_V2_196k - EleutherAI/pile language: - en metrics: - accuracy pipeline_tag: text-generation tags: - legal --- # Adapt (Large) Language Models to Domains This repo contains the domain-specific base model developed from **LLaMA-1-13B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530). We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**. ### 🤗 We are currently working hard on developing models across different domains, scales and architectures! Please stay tuned! 🤗 **************************** **Updates** **************************** * 12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/law-LLM-13B) developed from LLaMA-1-13B. * 12/8: Released our [chat models](https://huggingface.co/AdaptLLM/law-chat) developed from LLaMA-2-Chat-7B. * 9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/law-tasks), and [base models](https://huggingface.co/AdaptLLM/law-LLM) developed from LLaMA-1-7B. ## Domain-Specific LLaMA-1 ### LLaMA-1-7B In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available in Huggingface: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM), the performances of our AdaptLLM compared to other domain-specific LLMs are: <p align='center'> <img src="https://hf.fast360.xyz/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700"> </p> ### LLaMA-1-13B Moreover, we scale up our base model to LLaMA-1-13B to see if **our method is similarly effective for larger-scale models**, and the results are consistently positive too: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B). ## Domain-Specific LLaMA-2-Chat Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension can perfectly fit the data format** by transforming the reading comprehension into a multi-turn conversation. 
We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat) For example, to chat with the law model: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("AdaptLLM/law-chat") tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/law-chat", use_fast=False) # Put your input here: user_input = '''Question: Which of the following is false about ex post facto laws? Options: - They make criminal an act that was innocent when committed. - They prescribe greater punishment for an act than was prescribed when it was done. - They increase the evidence required to convict a person than when the act was done. - They alter criminal offenses or punishment in a substantially prejudicial manner for the purpose of punishing a person for some past activity. Please provide your choice first and then provide explanations if possible.''' # We use the prompt template of LLaMA-2-Chat demo prompt = f"<s>[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n{user_input} [/INST]" inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to(model.device) outputs = model.generate(input_ids=inputs, max_length=4096)[0] answer_start = int(inputs.shape[-1]) pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True) print(f'### User Input:\n{user_input}\n\n### Assistant Output:\n{pred}') ``` ## Domain-Specific Tasks To easily reproduce our results, we have uploaded the filled-in zero/few-shot input instructions and output completions of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks). **Note:** those filled-in instructions are specifically tailored for models before alignment and do NOT fit for the specific data format required for chat models. ## Citation If you find our work helpful, please cite us: ```bibtex @article{adaptllm, title = {Adapting Large Language Models via Reading Comprehension}, author = {Daixuan Cheng and Shaohan Huang and Furu Wei}, journal = {CoRR}, volume = {abs/2309.09530}, year = {2023} } ```
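Note that the chat snippet above only applies to the LLaMA-2-Chat-based models. For the base models such as `AdaptLLM/law-LLM-13B`, no chat template is needed; the sketch below is a minimal, illustrative zero-shot prompt with a placeholder question. To reproduce the reported results, use the filled-in instructions from the task datasets linked below.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "AdaptLLM/law-LLM-13B"  # base (pre-alignment) model
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Plain zero-shot prompt; no LLaMA-2-Chat [INST] template for the base models.
prompt = "Question: What is an ex post facto law?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)  # illustrative setting
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```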
null
Non_BioNLP
# Adapt (Large) Language Models to Domains This repo contains the domain-specific base model developed from **LLaMA-1-13B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530). We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**. ### 🤗 We are currently working hard on developing models across different domains, scales and architectures! Please stay tuned! 🤗 **************************** **Updates** **************************** * 12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/law-LLM-13B) developed from LLaMA-1-13B. * 12/8: Released our [chat models](https://huggingface.co/AdaptLLM/law-chat) developed from LLaMA-2-Chat-7B. * 9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/law-tasks), and [base models](https://huggingface.co/AdaptLLM/law-LLM) developed from LLaMA-1-7B. ## Domain-Specific LLaMA-1 ### LLaMA-1-7B In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available in Huggingface: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM), the performances of our AdaptLLM compared to other domain-specific LLMs are: <p align='center'> <img src="https://hf.fast360.xyz/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700"> </p> ### LLaMA-1-13B Moreover, we scale up our base model to LLaMA-1-13B to see if **our method is similarly effective for larger-scale models**, and the results are consistently positive too: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B). ## Domain-Specific LLaMA-2-Chat Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension can perfectly fit the data format** by transforming the reading comprehension into a multi-turn conversation. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat) For example, to chat with the law model: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("AdaptLLM/law-chat") tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/law-chat", use_fast=False) # Put your input here: user_input = '''Question: Which of the following is false about ex post facto laws? Options: - They make criminal an act that was innocent when committed. - They prescribe greater punishment for an act than was prescribed when it was done. 
- They increase the evidence required to convict a person than when the act was done. - They alter criminal offenses or punishment in a substantially prejudicial manner for the purpose of punishing a person for some past activity. Please provide your choice first and then provide explanations if possible.''' # We use the prompt template of LLaMA-2-Chat demo prompt = f"<s>[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n{user_input} [/INST]" inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to(model.device) outputs = model.generate(input_ids=inputs, max_length=4096)[0] answer_start = int(inputs.shape[-1]) pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True) print(f'### User Input:\n{user_input}\n\n### Assistant Output:\n{pred}') ``` ## Domain-Specific Tasks To easily reproduce our results, we have uploaded the filled-in zero/few-shot input instructions and output completions of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks). **Note:** those filled-in instructions are specifically tailored for models before alignment and do NOT fit for the specific data format required for chat models. ## Citation If you find our work helpful, please cite us: ```bibtex @article{adaptllm, title = {Adapting Large Language Models via Reading Comprehension}, author = {Daixuan Cheng and Shaohan Huang and Furu Wei}, journal = {CoRR}, volume = {abs/2309.09530}, year = {2023} } ```
{"datasets": ["Open-Orca/OpenOrca", "GAIR/lima", "WizardLM/WizardLM_evol_instruct_V2_196k", "EleutherAI/pile"], "language": ["en"], "metrics": ["accuracy"], "pipeline_tag": "text-generation", "tags": ["legal"]}
task
[ "QUESTION_ANSWERING" ]
46,026
LucasG/sound_project
LucasG
summarization
[ "tensorflowtts", "summarization", "fr", "region:us" ]
2023-02-16T15:38:34Z
2023-02-16T15:41:19+00:00
0
0
--- language: - fr library_name: tensorflowtts metrics: - accuracy pipeline_tag: summarization ---
null
Non_BioNLP
{"language": ["fr"], "library_name": "tensorflowtts", "metrics": ["accuracy"], "pipeline_tag": "summarization"}
task
[ "SUMMARIZATION" ]
46,027
cansurav/bert-base-uncased-finetuned-cola-dropout-0.1
cansurav
text-classification
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-05-05T10:35:22Z
2023-05-05T10:49:51+00:00
10
0
--- datasets: - glue license: apache-2.0 metrics: - matthews_correlation tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-cola-dropout-0.1 results: - task: type: text-classification name: Text Classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - type: matthews_correlation value: 0.593197037544882 name: Matthews Correlation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola-dropout-0.1 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.1127 - Matthews Correlation: 0.5932 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.49 | 1.0 | 535 | 0.5310 | 0.4914 | | 0.3003 | 2.0 | 1070 | 0.5391 | 0.5572 | | 0.2033 | 3.0 | 1605 | 0.6975 | 0.5473 | | 0.1427 | 4.0 | 2140 | 0.8513 | 0.5612 | | 0.0998 | 5.0 | 2675 | 0.8598 | 0.5829 | | 0.0783 | 6.0 | 3210 | 1.1127 | 0.5932 | | 0.0456 | 7.0 | 3745 | 1.0697 | 0.5890 | | 0.0395 | 8.0 | 4280 | 1.1813 | 0.5782 | | 0.0277 | 9.0 | 4815 | 1.2958 | 0.5727 | | 0.0205 | 10.0 | 5350 | 1.3045 | 0.5832 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
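For reference, a run with the hyperparameters listed above could be reproduced roughly as sketched below. This is an approximation of the original training setup, not the exact script used, so results may differ slightly.

```python
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Tokenize the GLUE CoLA dataset (single-sentence acceptability judgments).
cola = load_dataset("glue", "cola").map(
    lambda ex: tokenizer(ex["sentence"], truncation=True), batched=True
)
metric = evaluate.load("glue", "cola")  # reports matthews_correlation

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return metric.compute(predictions=np.argmax(logits, axis=-1), references=labels)

args = TrainingArguments(
    output_dir="finetuned-cola",        # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    evaluation_strategy="epoch",
    seed=42,
)

Trainer(
    model=model,
    args=args,
    train_dataset=cola["train"],
    eval_dataset=cola["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
).train()
```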
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola-dropout-0.1 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.1127 - Matthews Correlation: 0.5932 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.49 | 1.0 | 535 | 0.5310 | 0.4914 | | 0.3003 | 2.0 | 1070 | 0.5391 | 0.5572 | | 0.2033 | 3.0 | 1605 | 0.6975 | 0.5473 | | 0.1427 | 4.0 | 2140 | 0.8513 | 0.5612 | | 0.0998 | 5.0 | 2675 | 0.8598 | 0.5829 | | 0.0783 | 6.0 | 3210 | 1.1127 | 0.5932 | | 0.0456 | 7.0 | 3745 | 1.0697 | 0.5890 | | 0.0395 | 8.0 | 4280 | 1.1813 | 0.5782 | | 0.0277 | 9.0 | 4815 | 1.2958 | 0.5727 | | 0.0205 | 10.0 | 5350 | 1.3045 | 0.5832 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["matthews_correlation"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-uncased-finetuned-cola-dropout-0.1", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "cola", "split": "validation", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.593197037544882, "name": "Matthews Correlation"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,028
trnt/twitter_emotions
trnt
text-classification
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2021-11-20T04:31:53+00:00
79
1
--- datasets: - emotion license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: twitter_emotions results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion args: default metrics: - type: accuracy value: 0.9375 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter_emotions This model is a fine-tuned version of [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1647 - Accuracy: 0.9375 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2486 | 1.0 | 2000 | 0.2115 | 0.931 | | 0.135 | 2.0 | 4000 | 0.1725 | 0.936 | | 0.1041 | 3.0 | 6000 | 0.1647 | 0.9375 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
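A minimal usage sketch for this checkpoint is shown below. The label names are those of the emotion dataset; depending on how the config was saved, the pipeline may instead return generic `LABEL_0`–`LABEL_5` identifiers, which map to the same class order.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="trnt/twitter_emotions")

# Class order of the emotion dataset; used to decode generic LABEL_i outputs.
label_names = ["sadness", "joy", "love", "anger", "fear", "surprise"]

pred = classifier("i am feeling quite hopeful about tomorrow")[0]
if pred["label"].startswith("LABEL_"):
    pred["label"] = label_names[int(pred["label"].split("_")[-1])]
print(pred)
```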
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter_emotions This model is a fine-tuned version of [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1647 - Accuracy: 0.9375 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2486 | 1.0 | 2000 | 0.2115 | 0.931 | | 0.135 | 2.0 | 4000 | 0.1725 | 0.936 | | 0.1041 | 3.0 | 6000 | 0.1647 | 0.9375 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "twitter_emotions", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9375, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,029
gokuls/hBERTv2_new_pretrain_48_ver2_mrpc
gokuls
text-classification
[ "transformers", "pytorch", "hybridbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:gokuls/bert_12_layer_model_v2_complete_training_new_48", "base_model:finetune:gokuls/bert_12_layer_model_v2_complete_training_new_48", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-10-18T00:26:54Z
2023-10-18T00:41:19+00:00
36
0
--- base_model: gokuls/bert_12_layer_model_v2_complete_training_new_48 datasets: - glue language: - en metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: hBERTv2_new_pretrain_48_ver2_mrpc results: - task: type: text-classification name: Text Classification dataset: name: GLUE MRPC type: glue config: mrpc split: validation args: mrpc metrics: - type: accuracy value: 0.696078431372549 name: Accuracy - type: f1 value: 0.7832167832167833 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hBERTv2_new_pretrain_48_ver2_mrpc This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.5864 - Accuracy: 0.6961 - F1: 0.7832 - Combined Score: 0.7396 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:| | 0.664 | 1.0 | 58 | 0.6194 | 0.6716 | 0.7481 | 0.7098 | | 0.6055 | 2.0 | 116 | 0.5864 | 0.6961 | 0.7832 | 0.7396 | | 0.5319 | 3.0 | 174 | 0.6058 | 0.6838 | 0.7772 | 0.7305 | | 0.4447 | 4.0 | 232 | 0.7045 | 0.6667 | 0.7679 | 0.7173 | | 0.3601 | 5.0 | 290 | 0.7750 | 0.6642 | 0.7609 | 0.7126 | | 0.2754 | 6.0 | 348 | 1.0176 | 0.6789 | 0.7813 | 0.7301 | | 0.1895 | 7.0 | 406 | 1.4308 | 0.6299 | 0.7229 | 0.6764 | ### Framework versions - Transformers 4.34.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.14.5 - Tokenizers 0.14.1
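The "Combined Score" column above is the mean of accuracy and F1 (e.g., (0.6961 + 0.7832) / 2 ≈ 0.7396). The sketch below shows how these MRPC metrics can be computed with the `evaluate` library; the predictions and references are placeholders.

```python
import evaluate

metric = evaluate.load("glue", "mrpc")  # returns accuracy and f1

# Placeholder predictions/references; in practice these come from the validation set.
predictions = [1, 0, 1, 1, 0, 1]
references = [1, 0, 0, 1, 0, 1]

scores = metric.compute(predictions=predictions, references=references)
scores["combined_score"] = (scores["accuracy"] + scores["f1"]) / 2
print(scores)
```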
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hBERTv2_new_pretrain_48_ver2_mrpc This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.5864 - Accuracy: 0.6961 - F1: 0.7832 - Combined Score: 0.7396 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:| | 0.664 | 1.0 | 58 | 0.6194 | 0.6716 | 0.7481 | 0.7098 | | 0.6055 | 2.0 | 116 | 0.5864 | 0.6961 | 0.7832 | 0.7396 | | 0.5319 | 3.0 | 174 | 0.6058 | 0.6838 | 0.7772 | 0.7305 | | 0.4447 | 4.0 | 232 | 0.7045 | 0.6667 | 0.7679 | 0.7173 | | 0.3601 | 5.0 | 290 | 0.7750 | 0.6642 | 0.7609 | 0.7126 | | 0.2754 | 6.0 | 348 | 1.0176 | 0.6789 | 0.7813 | 0.7301 | | 0.1895 | 7.0 | 406 | 1.4308 | 0.6299 | 0.7229 | 0.6764 | ### Framework versions - Transformers 4.34.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.14.5 - Tokenizers 0.14.1
{"base_model": "gokuls/bert_12_layer_model_v2_complete_training_new_48", "datasets": ["glue"], "language": ["en"], "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "hBERTv2_new_pretrain_48_ver2_mrpc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE MRPC", "type": "glue", "config": "mrpc", "split": "validation", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.696078431372549, "name": "Accuracy"}, {"type": "f1", "value": 0.7832167832167833, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,030
Xenova/opus-mt-vi-en
Xenova
translation
[ "transformers.js", "onnx", "marian", "text2text-generation", "translation", "base_model:Helsinki-NLP/opus-mt-vi-en", "base_model:quantized:Helsinki-NLP/opus-mt-vi-en", "region:us" ]
2023-08-08T16:23:51Z
2024-10-08T13:40:29+00:00
75
0
--- base_model: Helsinki-NLP/opus-mt-vi-en library_name: transformers.js pipeline_tag: translation --- https://huggingface.co/Helsinki-NLP/opus-mt-vi-en with ONNX weights to be compatible with Transformers.js. Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
null
Non_BioNLP
ERROR: type should be string, got "\nhttps://huggingface.co/Helsinki-NLP/opus-mt-vi-en with ONNX weights to be compatible with Transformers.js.\n\nNote: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`)."
{"base_model": "Helsinki-NLP/opus-mt-vi-en", "library_name": "transformers.js", "pipeline_tag": "translation"}
task
[ "TRANSLATION" ]
46,031
FINGU-AI/Qwen2.5-32B-Lora-HQ-e-3
FINGU-AI
null
[ "peft", "safetensors", "en", "ko", "zh", "pt", "ja", "uz", "tl", "th", "vi", "id", "base_model:Qwen/Qwen2.5-32B-Instruct", "base_model:adapter:Qwen/Qwen2.5-32B-Instruct", "license:mit", "region:us" ]
2024-11-10T15:18:13Z
2024-11-10T15:19:11+00:00
4
0
--- base_model: Qwen/Qwen2.5-32B-Instruct language: - en - ko - zh - pt - ja - uz - tl - th - vi - id library_name: peft license: mit --- # FINGU-AI/Qwen2.5-32B-Lora-HQ-e-3 ## Overview `FINGU-AI/Qwen2.5-32B-Lora-HQ-e-3` is a powerful causal language model designed for a variety of natural language processing (NLP) tasks, including machine translation, text generation, and chat-based applications. This model is particularly useful for translating between Korean and Uzbek, as well as supporting other custom NLP tasks through flexible input. ## Model Details - **Model ID**: `FINGU-AI/Qwen2.5-32B-Lora-HQ-e-3` - **Architecture**: Causal Language Model (LM) - **Parameters**: 32 billion - **Precision**: Torch BF16 for efficient GPU memory usage - **Attention**: SDPA (Scaled Dot-Product Attention) - **Primary Use Case**: Translation (e.g., Korean to Uzbek), text generation, and dialogue systems. ## Example Usage ### Installation Make sure to install the required packages: ```bash pip install torch transformers ``` ### Loading the Model ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch # Model and Tokenizer model_id = 'FINGU-AI/Qwen2.5-32B-Lora-HQ-e-3' model = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation="sdpa", torch_dtype=torch.bfloat16) tokenizer = AutoTokenizer.from_pretrained(model_id) model.to('cuda') # Input Messages for Translation messages = [ {"role": "system", "content": "translate korean to Uzbek"}, {"role": "user", "content": """새로운 은행 계좌를 개설하는 절차는 다음과 같습니다: 1. 계좌 개설 목적과 신분 확인을 위한 서류 제출 2. 서류 검토 과정을 거치는 것 3. 고객님의 신원 확인 절차를 진행하는 것 4. 모든 절차가 완료되면 계좌 개설이 가능합니다. 계좌 개설을 원하시는 경우, 신분증과 함께 방문해 주시면 됩니다. """}, ] # Tokenize and Generate Response input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to('cuda') outputs = model.generate( input_ids, max_new_tokens=500, do_sample=True, ) # Decode and Print the Translation response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ```
null
Non_BioNLP
# FINGU-AI/Qwen2.5-32B-Lora-HQ-e-3 ## Overview `FINGU-AI/Qwen2.5-32B-Lora-HQ-e-3` is a powerful causal language model designed for a variety of natural language processing (NLP) tasks, including machine translation, text generation, and chat-based applications. This model is particularly useful for translating between Korean and Uzbek, as well as supporting other custom NLP tasks through flexible input. ## Model Details - **Model ID**: `FINGU-AI/Qwen2.5-32B-Lora-HQ-e-3` - **Architecture**: Causal Language Model (LM) - **Parameters**: 32 billion - **Precision**: Torch BF16 for efficient GPU memory usage - **Attention**: SDPA (Scaled Dot-Product Attention) - **Primary Use Case**: Translation (e.g., Korean to Uzbek), text generation, and dialogue systems. ## Example Usage ### Installation Make sure to install the required packages: ```bash pip install torch transformers ``` ### Loading the Model ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch # Model and Tokenizer model_id = 'FINGU-AI/Qwen2.5-32B-Lora-HQ-e-3' model = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation="sdpa", torch_dtype=torch.bfloat16) tokenizer = AutoTokenizer.from_pretrained(model_id) model.to('cuda') # Input Messages for Translation messages = [ {"role": "system", "content": "translate korean to Uzbek"}, {"role": "user", "content": """새로운 은행 계좌를 개설하는 절차는 다음과 같습니다: 1. 계좌 개설 목적과 신분 확인을 위한 서류 제출 2. 서류 검토 과정을 거치는 것 3. 고객님의 신원 확인 절차를 진행하는 것 4. 모든 절차가 완료되면 계좌 개설이 가능합니다. 계좌 개설을 원하시는 경우, 신분증과 함께 방문해 주시면 됩니다. """}, ] # Tokenize and Generate Response input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to('cuda') outputs = model.generate( input_ids, max_new_tokens=500, do_sample=True, ) # Decode and Print the Translation response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ```
{"base_model": "Qwen/Qwen2.5-32B-Instruct", "language": ["en", "ko", "zh", "pt", "ja", "uz", "tl", "th", "vi", "id"], "library_name": "peft", "license": "mit"}
task
[ "TRANSLATION" ]
46,033
gaudi/opus-mt-es-ceb-ctranslate2
gaudi
translation
[ "transformers", "marian", "ctranslate2", "translation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-07-22T15:43:56Z
2024-10-19T02:32:34+00:00
6
0
--- license: apache-2.0 tags: - ctranslate2 - translation --- # Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-es-ceb) - This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Currently supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers. 
## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-es-ceb).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-es-ceb --output_dir ./ctranslate2/opus-mt-es-ceb-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-es-ceb-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-es-ceb-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. 
) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-es-ceb-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be identical to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-es-ceb) by Helsinki-NLP.
null
Non_BioNLP
# Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-es-ceb) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-es-ceb).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. 
A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-es-ceb --output_dir ./ctranslate2/opus-mt-es-ceb-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-es-ceb-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-es-ceb-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-es-ceb-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-es-ceb) by Helsinki-NLP.
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
task
[ "TRANSLATION" ]
46,034
serdarcaglar/roberta-base-turkish-scientific-cased-ONNX
serdarcaglar
null
[ "onnx", "roberta", "tr", "region:us" ]
2024-11-28T15:44:55Z
2024-11-28T15:48:45+00:00
12
1
--- language: - tr --- # Model Card: Turkish Scientific RoBERTa ONNX ## Model Description ONNX version of roberta-base-turkish-scientific-cased, specialized for Turkish scientific text analysis. ## Intended Use - Scientific text analysis in Turkish - Text comprehension - Fill-mask predictions - Scientific text summarization ## Training Data - Source: Turkish scientific article abstracts from trdizin, yöktez, and t.k. - Training Duration: 3+ days - Steps: 2M - Built from scratch, no fine-tuning ## Technical Specifications - Base Architecture: RoBERTa - Tokenizer: BPE (Byte Pair Encoding) - Format: ONNX - Original Model: serdarcaglar/roberta-base-turkish-scientific-cased ## Performance and Limitations - Optimized for scientific domain in Turkish - Not tested for general domain text - ONNX format optimized for inference ## Requirements - onnxruntime - transformers - torch ## License and Usage - Follow original model license - Users responsible for compliance ## Citation ```bibtex @misc{caglar2024roberta, author = {Çağlar, Serdar}, title = {Roberta-base-turkish-scientific-cased}, year = {2024}, publisher = {HuggingFace}, url = {https://huggingface.co/serdarcaglar/roberta-base-turkish-scientific-cased} } ``` ## Contact Serdar ÇAĞLAR ([email protected])
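The card lists `onnxruntime` and `transformers` as requirements but does not include an inference snippet. A minimal fill-mask sketch using the Optimum ONNX Runtime integration might look like the following; it assumes the repository's ONNX export is laid out the way `optimum` expects, which the card does not state.

```python
# Hedged sketch: assumes the ONNX export can be loaded via optimum's
# ORTModelForMaskedLM; adjust file names if the repo uses a different layout.
from optimum.onnxruntime import ORTModelForMaskedLM
from transformers import AutoTokenizer, pipeline

model_id = "serdarcaglar/roberta-base-turkish-scientific-cased-ONNX"

model = ORTModelForMaskedLM.from_pretrained(model_id)  # runs on ONNX Runtime, not PyTorch
tokenizer = AutoTokenizer.from_pretrained(model_id)

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("Bu çalışmada <mask> yöntemi kullanılmıştır."))  # example Turkish scientific sentence
```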
null
TBD
# Model Card: Turkish Scientific RoBERTa ONNX ## Model Description ONNX version of roberta-base-turkish-scientific-cased, specialized for Turkish scientific text analysis. ## Intended Use - Scientific text analysis in Turkish - Text comprehension - Fill-mask predictions - Scientific text summarization ## Training Data - Source: Turkish scientific article abstracts from trdizin, yöktez, and t.k. - Training Duration: 3+ days - Steps: 2M - Built from scratch, no fine-tuning ## Technical Specifications - Base Architecture: RoBERTa - Tokenizer: BPE (Byte Pair Encoding) - Format: ONNX - Original Model: serdarcaglar/roberta-base-turkish-scientific-cased ## Performance and Limitations - Optimized for scientific domain in Turkish - Not tested for general domain text - ONNX format optimized for inference ## Requirements - onnxruntime - transformers - torch ## License and Usage - Follow original model license - Users responsible for compliance ## Citation ```bibtex @misc{caglar2024roberta, author = {Çağlar, Serdar}, title = {Roberta-base-turkish-scientific-cased}, year = {2024}, publisher = {HuggingFace}, url = {https://huggingface.co/serdarcaglar/roberta-base-turkish-scientific-cased} } ``` ## Contact Serdar ÇAĞLAR ([email protected])
{"language": ["tr"]}
task
[ "SUMMARIZATION" ]
46,035
Esmail-AGumaan/LlTRA-model
Esmail-AGumaan
null
[ "dataset:opus_books", "license:mit", "region:us" ]
2024-03-15T11:19:03Z
2024-03-15T11:38:32+00:00
0
2
--- datasets: - opus_books license: mit --- LlTRA stands for: Language to Language Transformer model, based on the paper "Attention Is All You Need": a Transformer model built from scratch in PyTorch and used for translation. --- Problem Statement: In the rapidly evolving landscape of natural language processing (NLP) and machine translation, there exists a persistent challenge in achieving accurate and contextually rich language-to-language transformations. Existing models often struggle with capturing nuanced semantic meanings, context preservation, and maintaining grammatical coherence across different languages. Additionally, the demand for efficient cross-lingual communication and content generation has underscored the need for a versatile language transformer model that can seamlessly navigate the intricacies of diverse linguistic structures. --- Goal: Develop a specialized language-to-language transformer model that accurately translates from the Arabic language to the English language, ensuring semantic fidelity, contextual awareness, cross-lingual adaptability, and the retention of grammar and style. The model should provide efficient training and inference processes to make it practical and accessible for a wide range of applications, ultimately contributing to the advancement of Arabic-to-English language translation capabilities. --- Dataset used: huggingface/opus_infopankki from the Hugging Face Hub --- Configuration: these are the settings of the model. You can customize the source and target languages, sequence lengths for each, the number of epochs, batch size, and more. ```python def Get_configuration(): return { "batch_size": 8, "num_epochs": 30, "lr": 10**-4, "sequence_length": 100, "d_model": 512, "datasource": 'opus_infopankki', "source_language": "ar", "target_language": "en", "model_folder": "weights", "model_basename": "tmodel_", "preload": "latest", "tokenizer_file": "tokenizer_{0}.json", "experiment_name": "runs/tmodel" } ``` --- Training: I uploaded the project to my Google Drive and then connected it to Google Colab to train it: - hours of training: 4 hours. - epochs: 20. - number of dataset rows: 2,934,399. - size of the dataset: 95MB. - size of the auto-converted parquet files: 153MB. - Arabic tokens: 29999. - English tokens: 15697. - pre-trained model saved in Colab. - BLEU score from Arabic to English: 19.7 ---
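As a hedged illustration of the data setup mentioned above, the opus_infopankki corpus can typically be pulled from the Hugging Face Hub as shown below; the "ar-en" configuration name is an assumption derived from the `source_language`/`target_language` settings in `Get_configuration()`, not something the card states explicitly.

```python
# Hedged sketch: the "ar-en" configuration name and the "translation" field
# are assumptions; check the dataset page if loading fails.
from datasets import load_dataset

dataset = load_dataset("opus_infopankki", "ar-en", split="train")

# Each row should hold a sentence pair, e.g. {"ar": "...", "en": "..."}.
print(dataset[0]["translation"])
```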
null
Non_BioNLP
LlTRA stands for: Language to Language Transformer model, based on the paper "Attention Is All You Need": a Transformer model built from scratch in PyTorch and used for translation. --- Problem Statement: In the rapidly evolving landscape of natural language processing (NLP) and machine translation, there exists a persistent challenge in achieving accurate and contextually rich language-to-language transformations. Existing models often struggle with capturing nuanced semantic meanings, context preservation, and maintaining grammatical coherence across different languages. Additionally, the demand for efficient cross-lingual communication and content generation has underscored the need for a versatile language transformer model that can seamlessly navigate the intricacies of diverse linguistic structures. --- Goal: Develop a specialized language-to-language transformer model that accurately translates from the Arabic language to the English language, ensuring semantic fidelity, contextual awareness, cross-lingual adaptability, and the retention of grammar and style. The model should provide efficient training and inference processes to make it practical and accessible for a wide range of applications, ultimately contributing to the advancement of Arabic-to-English language translation capabilities. --- Dataset used: huggingface/opus_infopankki from the Hugging Face Hub --- Configuration: these are the settings of the model. You can customize the source and target languages, sequence lengths for each, the number of epochs, batch size, and more. ```python def Get_configuration(): return { "batch_size": 8, "num_epochs": 30, "lr": 10**-4, "sequence_length": 100, "d_model": 512, "datasource": 'opus_infopankki', "source_language": "ar", "target_language": "en", "model_folder": "weights", "model_basename": "tmodel_", "preload": "latest", "tokenizer_file": "tokenizer_{0}.json", "experiment_name": "runs/tmodel" } ``` --- Training: I uploaded the project to my Google Drive and then connected it to Google Colab to train it: - hours of training: 4 hours. - epochs: 20. - number of dataset rows: 2,934,399. - size of the dataset: 95MB. - size of the auto-converted parquet files: 153MB. - Arabic tokens: 29999. - English tokens: 15697. - pre-trained model saved in Colab. - BLEU score from Arabic to English: 19.7 ---
{"datasets": ["opus_books"], "license": "mit"}
task
[ "TRANSLATION" ]
46,036
LoneStriker/Master-Yi-9B-8.0bpw-h8-exl2
LoneStriker
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "exl2", "region:us" ]
2024-05-18T19:44:12Z
2024-05-18T19:47:45+00:00
7
2
--- license: apache-2.0 --- ## Model Description Master is a collection of LLMs trained using human-collected seed questions, with the answers regenerated by a mixture of high-performance open-source LLMs. **Master-Yi-9B** is trained using the ORPO technique. The model shows strong abilities in reasoning on coding and math questions. **Quantized Version**: [Here](https://huggingface.co/qnguyen3/Master-Yi-9B-GGUF) **Master-Yi-9B-Vision**: **Coming Soon** ![img](https://huggingface.co/qnguyen3/Master-Yi-9B/resolve/main/Master-Yi-9B.webp) ## Prompt Template ``` <|im_start|>system You are a helpful AI assistant.<|im_end|> <|im_start|>user What is the meaning of life?<|im_end|> <|im_start|>assistant ``` ## Examples ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630430583926de1f7ec62c6b/E27JmdRAMrHQacM50-lBk.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630430583926de1f7ec62c6b/z0HS4bxHFQzPe0gZlvCzZ.png) ## Inference Code ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained( "qnguyen3/Master-Yi-9B", torch_dtype='auto', device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("qnguyen3/Master-Yi-9B") prompt = "What is the meaning of life?" messages = [ {"role": "system", "content": "You are a helpful AI assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=1024, eos_token_id=tokenizer.eos_token_id, temperature=0.25, ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids)[0] print(response) ``` ## Benchmarks ### Nous Benchmark: | Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average| |---------------------------------------------------|------:|------:|---------:|-------:|------:| |[Master-Yi-9B](https://huggingface.co/qnguyen3/Master-Yi-9B)| 43.55| 71.48| 48.54| 41.43| 51.25| ### AGIEval ``` | Task |Version| Metric |Value| |Stderr| |------------------------------|------:|--------|----:|---|-----:| |agieval_aqua_rat | 0|acc |35.83|± | 3.01| | | |acc_norm|31.89|± | 2.93| |agieval_logiqa_en | 0|acc |38.25|± | 1.91| | | |acc_norm|37.79|± | 1.90| |agieval_lsat_ar | 0|acc |23.04|± | 2.78| | | |acc_norm|20.43|± | 2.66| |agieval_lsat_lr | 0|acc |48.04|± | 2.21| | | |acc_norm|42.75|± | 2.19| |agieval_lsat_rc | 0|acc |61.34|± | 2.97| | | |acc_norm|52.79|± | 3.05| |agieval_sat_en | 0|acc |79.13|± | 2.84| | | |acc_norm|72.33|± | 3.12| |agieval_sat_en_without_passage| 0|acc |44.17|± | 3.47| | | |acc_norm|42.72|± | 3.45| |agieval_sat_math | 0|acc |52.27|± | 3.38| | | |acc_norm|47.73|± | 3.38| Average: 43.55% ``` ### GPT4All ``` | Task |Version| Metric |Value| |Stderr| |-------------|------:|--------|----:|---|-----:| |arc_challenge| 0|acc |54.95|± | 1.45| | | |acc_norm|58.70|± | 1.44| |arc_easy | 0|acc |82.28|± | 0.78| | | |acc_norm|81.10|± | 0.80| |boolq | 1|acc |86.15|± | 0.60| |hellaswag | 0|acc |59.16|± | 0.49| | | |acc_norm|77.53|± | 0.42| |openbookqa | 0|acc |37.40|± | 2.17| | | |acc_norm|44.00|± | 2.22| |piqa | 0|acc |79.00|± | 0.95| | | |acc_norm|80.25|± | 0.93| |winogrande | 0|acc |72.61|± | 1.25| Average: 71.48% ``` ### TruthfulQA ``` | Task |Version|Metric|Value| |Stderr| 
|-------------|------:|------|----:|---|-----:| |truthfulqa_mc| 1|mc1 |33.05|± | 1.65| | | |mc2 |48.54|± | 1.54| Average: 48.54% ``` ### Bigbench ``` | Task |Version| Metric |Value| |Stderr| |------------------------------------------------|------:|---------------------|----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|54.74|± | 3.62| |bigbench_date_understanding | 0|multiple_choice_grade|68.02|± | 2.43| |bigbench_disambiguation_qa | 0|multiple_choice_grade|40.31|± | 3.06| |bigbench_geometric_shapes | 0|multiple_choice_grade|30.36|± | 2.43| | | |exact_str_match | 2.23|± | 0.78| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|26.00|± | 1.96| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|20.71|± | 1.53| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|44.00|± | 2.87| |bigbench_movie_recommendation | 0|multiple_choice_grade|35.00|± | 2.14| |bigbench_navigate | 0|multiple_choice_grade|58.40|± | 1.56| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|61.80|± | 1.09| |bigbench_ruin_names | 0|multiple_choice_grade|42.41|± | 2.34| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|31.56|± | 1.47| |bigbench_snarks | 0|multiple_choice_grade|55.25|± | 3.71| |bigbench_sports_understanding | 0|multiple_choice_grade|69.37|± | 1.47| |bigbench_temporal_sequences | 0|multiple_choice_grade|27.70|± | 1.42| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|21.36|± | 1.16| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|14.69|± | 0.85| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|44.00|± | 2.87| Average: 41.43% ``` **Average score**: 51.25% ### OpenLLM Benchmark: | Model |ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|Average| |---------------------------------------------------|---:|--------:|----:|---------:|---------:|----:|------:| |[Master-Yi-9B](https://huggingface.co/qnguyen3/Master-Yi-9B)|61.6| 79.89|69.95| 48.59| 77.35|67.48| 67.48| ### ARC ``` | Task |Version| Metric | Value | |Stderr| |-------------|------:|--------------------|-------------|---|------| |arc_challenge| 1|acc,none | 0.59| | | | | |acc_stderr,none | 0.01| | | | | |acc_norm,none | 0.62| | | | | |acc_norm_stderr,none| 0.01| | | | | |alias |arc_challenge| | | Average: 61.6% ``` ### HellaSwag ``` | Task |Version| Metric | Value | |Stderr| |---------|------:|--------------------|---------|---|------| |hellaswag| 1|acc,none | 0.61| | | | | |acc_stderr,none | 0| | | | | |acc_norm,none | 0.80| | | | | |acc_norm_stderr,none| 0| | | | | |alias |hellaswag| | | Average: 79.89% ``` ### MMLU ``` | Task |Version| Metric | Value | |Stderr| |----------------------------------------|-------|---------------|---------------------------------------|---|------| |mmlu |N/A |acc,none | 0.7| | | | | |acc_stderr,none| 0| | | | | |alias |mmlu | | | |mmlu_abstract_algebra | 0|alias | - abstract_algebra | | | | | |acc,none |0.46 | | | | | |acc_stderr,none|0.05 | | | |mmlu_anatomy | 0|alias | - anatomy | | | | | |acc,none |0.64 | | | | | |acc_stderr,none|0.04 | | | |mmlu_astronomy | 0|alias | - astronomy | | | | | |acc,none |0.77 | | | | | |acc_stderr,none|0.03 | | | |mmlu_business_ethics | 0|alias | - business_ethics | | | | | |acc,none |0.76 | | | | | |acc_stderr,none|0.04 | | | |mmlu_clinical_knowledge | 0|alias | - clinical_knowledge | | | | | |acc,none |0.71 | | | | | |acc_stderr,none|0.03 | | | |mmlu_college_biology | 0|alias | - college_biology | | | | | 
|acc,none |0.82 | | | | | |acc_stderr,none|0.03 | | | |mmlu_college_chemistry | 0|alias | - college_chemistry | | | | | |acc,none |0.52 | | | | | |acc_stderr,none|0.05 | | | |mmlu_college_computer_science | 0|alias | - college_computer_science | | | | | |acc,none |0.56 | | | | | |acc_stderr,none|0.05 | | | |mmlu_college_mathematics | 0|alias | - college_mathematics | | | | | |acc,none |0.44 | | | | | |acc_stderr,none|0.05 | | | |mmlu_college_medicine | 0|alias | - college_medicine | | | | | |acc,none |0.72 | | | | | |acc_stderr,none|0.03 | | | |mmlu_college_physics | 0|alias | - college_physics | | | | | |acc,none |0.45 | | | | | |acc_stderr,none|0.05 | | | |mmlu_computer_security | 0|alias | - computer_security | | | | | |acc,none |0.81 | | | | | |acc_stderr,none|0.04 | | | |mmlu_conceptual_physics | 0|alias | - conceptual_physics | | | | | |acc,none |0.74 | | | | | |acc_stderr,none|0.03 | | | |mmlu_econometrics | 0|alias | - econometrics | | | | | |acc,none |0.65 | | | | | |acc_stderr,none|0.04 | | | |mmlu_electrical_engineering | 0|alias | - electrical_engineering | | | | | |acc,none |0.72 | | | | | |acc_stderr,none|0.04 | | | |mmlu_elementary_mathematics | 0|alias | - elementary_mathematics | | | | | |acc,none |0.62 | | | | | |acc_stderr,none|0.02 | | | |mmlu_formal_logic | 0|alias | - formal_logic | | | | | |acc,none |0.57 | | | | | |acc_stderr,none|0.04 | | | |mmlu_global_facts | 0|alias | - global_facts | | | | | |acc,none |0.46 | | | | | |acc_stderr,none|0.05 | | | |mmlu_high_school_biology | 0|alias | - high_school_biology | | | | | |acc,none |0.86 | | | | | |acc_stderr,none|0.02 | | | |mmlu_high_school_chemistry | 0|alias | - high_school_chemistry | | | | | |acc,none |0.67 | | | | | |acc_stderr,none|0.03 | | | |mmlu_high_school_computer_science | 0|alias | - high_school_computer_science | | | | | |acc,none |0.84 | | | | | |acc_stderr,none|0.04 | | | |mmlu_high_school_european_history | 0|alias | - high_school_european_history | | | | | |acc,none |0.82 | | | | | |acc_stderr,none|0.03 | | | |mmlu_high_school_geography | 0|alias | - high_school_geography | | | | | |acc,none |0.86 | | | | | |acc_stderr,none|0.02 | | | |mmlu_high_school_government_and_politics| 0|alias | - high_school_government_and_politics| | | | | |acc,none |0.90 | | | | | |acc_stderr,none|0.02 | | | |mmlu_high_school_macroeconomics | 0|alias | - high_school_macroeconomics | | | | | |acc,none |0.75 | | | | | |acc_stderr,none|0.02 | | | |mmlu_high_school_mathematics | 0|alias | - high_school_mathematics | | | | | |acc,none |0.43 | | | | | |acc_stderr,none|0.03 | | | |mmlu_high_school_microeconomics | 0|alias | - high_school_microeconomics | | | | | |acc,none |0.86 | | | | | |acc_stderr,none|0.02 | | | |mmlu_high_school_physics | 0|alias | - high_school_physics | | | | | |acc,none |0.45 | | | | | |acc_stderr,none|0.04 | | | |mmlu_high_school_psychology | 0|alias | - high_school_psychology | | | | | |acc,none |0.87 | | | | | |acc_stderr,none|0.01 | | | |mmlu_high_school_statistics | 0|alias | - high_school_statistics | | | | | |acc,none |0.68 | | | | | |acc_stderr,none|0.03 | | | |mmlu_high_school_us_history | 0|alias | - high_school_us_history | | | | | |acc,none |0.85 | | | | | |acc_stderr,none|0.02 | | | |mmlu_high_school_world_history | 0|alias | - high_school_world_history | | | | | |acc,none |0.85 | | | | | |acc_stderr,none|0.02 | | | |mmlu_human_aging | 0|alias | - human_aging | | | | | |acc,none |0.76 | | | | | |acc_stderr,none|0.03 | | | |mmlu_human_sexuality | 0|alias | - human_sexuality | | | | | |acc,none 
|0.78 | | | | | |acc_stderr,none|0.04 | | | |mmlu_humanities |N/A |alias | - humanities | | | | | |acc,none |0.63 | | | | | |acc_stderr,none|0.01 | | | |mmlu_international_law | 0|alias | - international_law | | | | | |acc,none |0.79 | | | | | |acc_stderr,none|0.04 | | | |mmlu_jurisprudence | 0|alias | - jurisprudence | | | | | |acc,none |0.79 | | | | | |acc_stderr,none|0.04 | | | |mmlu_logical_fallacies | 0|alias | - logical_fallacies | | | | | |acc,none |0.80 | | | | | |acc_stderr,none|0.03 | | | |mmlu_machine_learning | 0|alias | - machine_learning | | | | | |acc,none |0.52 | | | | | |acc_stderr,none|0.05 | | | |mmlu_management | 0|alias | - management | | | | | |acc,none |0.83 | | | | | |acc_stderr,none|0.04 | | | |mmlu_marketing | 0|alias | - marketing | | | | | |acc,none |0.89 | | | | | |acc_stderr,none|0.02 | | | |mmlu_medical_genetics | 0|alias | - medical_genetics | | | | | |acc,none |0.78 | | | | | |acc_stderr,none|0.04 | | | |mmlu_miscellaneous | 0|alias | - miscellaneous | | | | | |acc,none |0.85 | | | | | |acc_stderr,none|0.01 | | | |mmlu_moral_disputes | 0|alias | - moral_disputes | | | | | |acc,none |0.75 | | | | | |acc_stderr,none|0.02 | | | |mmlu_moral_scenarios | 0|alias | - moral_scenarios | | | | | |acc,none |0.48 | | | | | |acc_stderr,none|0.02 | | | |mmlu_nutrition | 0|alias | - nutrition | | | | | |acc,none |0.77 | | | | | |acc_stderr,none|0.02 | | | |mmlu_other |N/A |alias | - other | | | | | |acc,none |0.75 | | | | | |acc_stderr,none|0.01 | | | |mmlu_philosophy | 0|alias | - philosophy | | | | | |acc,none |0.78 | | | | | |acc_stderr,none|0.02 | | | |mmlu_prehistory | 0|alias | - prehistory | | | | | |acc,none |0.77 | | | | | |acc_stderr,none|0.02 | | | |mmlu_professional_accounting | 0|alias | - professional_accounting | | | | | |acc,none |0.57 | | | | | |acc_stderr,none|0.03 | | | |mmlu_professional_law | 0|alias | - professional_law | | | | | |acc,none |0.50 | | | | | |acc_stderr,none|0.01 | | | |mmlu_professional_medicine | 0|alias | - professional_medicine | | | | | |acc,none |0.71 | | | | | |acc_stderr,none|0.03 | | | |mmlu_professional_psychology | 0|alias | - professional_psychology | | | | | |acc,none |0.73 | | | | | |acc_stderr,none|0.02 | | | |mmlu_public_relations | 0|alias | - public_relations | | | | | |acc,none |0.76 | | | | | |acc_stderr,none|0.04 | | | |mmlu_security_studies | 0|alias | - security_studies | | | | | |acc,none |0.78 | | | | | |acc_stderr,none|0.03 | | | |mmlu_social_sciences |N/A |alias | - social_sciences | | | | | |acc,none |0.81 | | | | | |acc_stderr,none|0.01 | | | |mmlu_sociology | 0|alias | - sociology | | | | | |acc,none |0.86 | | | | | |acc_stderr,none|0.02 | | | |mmlu_stem |N/A |alias | - stem | | | | | |acc,none |0.65 | | | | | |acc_stderr,none|0.01 | | | |mmlu_us_foreign_policy | 0|alias | - us_foreign_policy | | | | | |acc,none |0.92 | | | | | |acc_stderr,none|0.03 | | | |mmlu_virology | 0|alias | - virology | | | | | |acc,none |0.58 | | | | | |acc_stderr,none|0.04 | | | |mmlu_world_religions | 0|alias | - world_religions | | | | | |acc,none |0.82 | | | | | |acc_stderr,none|0.03 | | | Average: 69.95% ``` ### TruthfulQA ``` | Task |Version| Metric | Value | |Stderr| |--------------|-------|-----------------------|-----------------|---|------| |truthfulqa |N/A |bleu_acc,none | 0.45| | | | | |bleu_acc_stderr,none | 0.02| | | | | |rouge1_acc,none | 0.45| | | | | |rouge1_acc_stderr,none | 0.02| | | | | |rouge2_diff,none | 0.92| | | | | |rouge2_diff_stderr,none| 1.07| | | | | |bleu_max,none | 23.77| | | | | |bleu_max_stderr,none 
| 0.81| | | | | |rouge2_acc,none | 0.38| | | | | |rouge2_acc_stderr,none | 0.02| | | | | |acc,none | 0.41| | | | | |acc_stderr,none | 0.01| | | | | |rougeL_diff,none | 1.57| | | | | |rougeL_diff_stderr,none| 0.93| | | | | |rougeL_acc,none | 0.46| | | | | |rougeL_acc_stderr,none | 0.02| | | | | |bleu_diff,none | 1.38| | | | | |bleu_diff_stderr,none | 0.75| | | | | |rouge2_max,none | 33.01| | | | | |rouge2_max_stderr,none | 1.05| | | | | |rouge1_diff,none | 1.72| | | | | |rouge1_diff_stderr,none| 0.92| | | | | |rougeL_max,none | 45.25| | | | | |rougeL_max_stderr,none | 0.92| | | | | |rouge1_max,none | 48.29| | | | | |rouge1_max_stderr,none | 0.90| | | | | |alias |truthfulqa | | | |truthfulqa_gen| 3|bleu_max,none | 23.77| | | | | |bleu_max_stderr,none | 0.81| | | | | |bleu_acc,none | 0.45| | | | | |bleu_acc_stderr,none | 0.02| | | | | |bleu_diff,none | 1.38| | | | | |bleu_diff_stderr,none | 0.75| | | | | |rouge1_max,none | 48.29| | | | | |rouge1_max_stderr,none | 0.90| | | | | |rouge1_acc,none | 0.45| | | | | |rouge1_acc_stderr,none | 0.02| | | | | |rouge1_diff,none | 1.72| | | | | |rouge1_diff_stderr,none| 0.92| | | | | |rouge2_max,none | 33.01| | | | | |rouge2_max_stderr,none | 1.05| | | | | |rouge2_acc,none | 0.38| | | | | |rouge2_acc_stderr,none | 0.02| | | | | |rouge2_diff,none | 0.92| | | | | |rouge2_diff_stderr,none| 1.07| | | | | |rougeL_max,none | 45.25| | | | | |rougeL_max_stderr,none | 0.92| | | | | |rougeL_acc,none | 0.46| | | | | |rougeL_acc_stderr,none | 0.02| | | | | |rougeL_diff,none | 1.57| | | | | |rougeL_diff_stderr,none| 0.93| | | | | |alias | - truthfulqa_gen| | | |truthfulqa_mc1| 2|acc,none | 0.33| | | | | |acc_stderr,none | 0.02| | | | | |alias | - truthfulqa_mc1| | | |truthfulqa_mc2| 2|acc,none | 0.49| | | | | |acc_stderr,none | 0.02| | | | | |alias | - truthfulqa_mc2| | | Average: 48.59% ``` ### Winogrande ``` | Task |Version| Metric | Value | |Stderr| |----------|------:|---------------|----------|---|------| |winogrande| 1|acc,none | 0.77| | | | | |acc_stderr,none| 0.01| | | | | |alias |winogrande| | | Average: 77.35% ``` ### GSM8K ``` |Task |Version| Metric |Value| |Stderr| |-----|------:|-----------------------------------|-----|---|------| |gsm8k| 3|exact_match,strict-match | 0.67| | | | | |exact_match_stderr,strict-match | 0.01| | | | | |exact_match,flexible-extract | 0.68| | | | | |exact_match_stderr,flexible-extract| 0.01| | | | | |alias |gsm8k| | | Average: 67.48% ``` **Average score**: 67.48%
null
TBD
## Model Description Master is a collection of LLMs trained using human-collected seed questions and regenerate the answers with a mixture of high performance Open-source LLMs. **Master-Yi-9B** is trained using the ORPO technique. The model shows strong abilities in reasoning on coding and math questions. **Quantized Version**: [Here](https://huggingface.co/qnguyen3/Master-Yi-9B-GGUF) **Master-Yi-9B-Vision**: **Coming Soon** ![img](https://huggingface.co/qnguyen3/Master-Yi-9B/resolve/main/Master-Yi-9B.webp) ## Prompt Template ``` <|im_start|>system You are a helpful AI assistant.<|im_end|> <|im_start|>user What is the meaning of life?<|im_end|> <|im_start|>assistant ``` ## Examples ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630430583926de1f7ec62c6b/E27JmdRAMrHQacM50-lBk.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630430583926de1f7ec62c6b/z0HS4bxHFQzPe0gZlvCzZ.png) ## Inference Code ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained( "vilm/VinaLlama2-14B", torch_dtype='auto', device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("vilm/VinaLlama2-14B") prompt = "What is the mearning of life?" messages = [ {"role": "system", "content": "You are a helpful AI assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=1024, eos_token_id=tokenizer.eos_token_id, temperature=0.25, ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids)[0] print(response) ``` ## Benchmarks ### Nous Benchmark: | Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average| |---------------------------------------------------|------:|------:|---------:|-------:|------:| |[Master-Yi-9B](https://huggingface.co/qnguyen3/Master-Yi-9B)| 43.55| 71.48| 48.54| 41.43| 51.25| ### AGIEval ``` | Task |Version| Metric |Value| |Stderr| |------------------------------|------:|--------|----:|---|-----:| |agieval_aqua_rat | 0|acc |35.83|± | 3.01| | | |acc_norm|31.89|± | 2.93| |agieval_logiqa_en | 0|acc |38.25|± | 1.91| | | |acc_norm|37.79|± | 1.90| |agieval_lsat_ar | 0|acc |23.04|± | 2.78| | | |acc_norm|20.43|± | 2.66| |agieval_lsat_lr | 0|acc |48.04|± | 2.21| | | |acc_norm|42.75|± | 2.19| |agieval_lsat_rc | 0|acc |61.34|± | 2.97| | | |acc_norm|52.79|± | 3.05| |agieval_sat_en | 0|acc |79.13|± | 2.84| | | |acc_norm|72.33|± | 3.12| |agieval_sat_en_without_passage| 0|acc |44.17|± | 3.47| | | |acc_norm|42.72|± | 3.45| |agieval_sat_math | 0|acc |52.27|± | 3.38| | | |acc_norm|47.73|± | 3.38| Average: 43.55% ``` ### GPT4All ``` | Task |Version| Metric |Value| |Stderr| |-------------|------:|--------|----:|---|-----:| |arc_challenge| 0|acc |54.95|± | 1.45| | | |acc_norm|58.70|± | 1.44| |arc_easy | 0|acc |82.28|± | 0.78| | | |acc_norm|81.10|± | 0.80| |boolq | 1|acc |86.15|± | 0.60| |hellaswag | 0|acc |59.16|± | 0.49| | | |acc_norm|77.53|± | 0.42| |openbookqa | 0|acc |37.40|± | 2.17| | | |acc_norm|44.00|± | 2.22| |piqa | 0|acc |79.00|± | 0.95| | | |acc_norm|80.25|± | 0.93| |winogrande | 0|acc |72.61|± | 1.25| Average: 71.48% ``` ### TruthfulQA ``` | Task |Version|Metric|Value| |Stderr| |-------------|------:|------|----:|---|-----:| 
|truthfulqa_mc| 1|mc1 |33.05|± | 1.65| | | |mc2 |48.54|± | 1.54| Average: 48.54% ``` ### Bigbench ``` | Task |Version| Metric |Value| |Stderr| |------------------------------------------------|------:|---------------------|----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|54.74|± | 3.62| |bigbench_date_understanding | 0|multiple_choice_grade|68.02|± | 2.43| |bigbench_disambiguation_qa | 0|multiple_choice_grade|40.31|± | 3.06| |bigbench_geometric_shapes | 0|multiple_choice_grade|30.36|± | 2.43| | | |exact_str_match | 2.23|± | 0.78| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|26.00|± | 1.96| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|20.71|± | 1.53| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|44.00|± | 2.87| |bigbench_movie_recommendation | 0|multiple_choice_grade|35.00|± | 2.14| |bigbench_navigate | 0|multiple_choice_grade|58.40|± | 1.56| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|61.80|± | 1.09| |bigbench_ruin_names | 0|multiple_choice_grade|42.41|± | 2.34| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|31.56|± | 1.47| |bigbench_snarks | 0|multiple_choice_grade|55.25|± | 3.71| |bigbench_sports_understanding | 0|multiple_choice_grade|69.37|± | 1.47| |bigbench_temporal_sequences | 0|multiple_choice_grade|27.70|± | 1.42| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|21.36|± | 1.16| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|14.69|± | 0.85| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|44.00|± | 2.87| Average: 41.43% ``` **Average score**: 51.25% ### OpenLLM Benchmark: | Model |ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|Average| |---------------------------------------------------|---:|--------:|----:|---------:|---------:|----:|------:| |[Master-Yi-9B](https://huggingface.co/qnguyen3/Master-Yi-9B)|61.6| 79.89|69.95| 48.59| 77.35|67.48| 67.48| ### ARC ``` | Task |Version| Metric | Value | |Stderr| |-------------|------:|--------------------|-------------|---|------| |arc_challenge| 1|acc,none | 0.59| | | | | |acc_stderr,none | 0.01| | | | | |acc_norm,none | 0.62| | | | | |acc_norm_stderr,none| 0.01| | | | | |alias |arc_challenge| | | Average: 61.6% ``` ### HellaSwag ``` | Task |Version| Metric | Value | |Stderr| |---------|------:|--------------------|---------|---|------| |hellaswag| 1|acc,none | 0.61| | | | | |acc_stderr,none | 0| | | | | |acc_norm,none | 0.80| | | | | |acc_norm_stderr,none| 0| | | | | |alias |hellaswag| | | Average: 79.89% ``` ### MMLU ``` | Task |Version| Metric | Value | |Stderr| |----------------------------------------|-------|---------------|---------------------------------------|---|------| |mmlu |N/A |acc,none | 0.7| | | | | |acc_stderr,none| 0| | | | | |alias |mmlu | | | |mmlu_abstract_algebra | 0|alias | - abstract_algebra | | | | | |acc,none |0.46 | | | | | |acc_stderr,none|0.05 | | | |mmlu_anatomy | 0|alias | - anatomy | | | | | |acc,none |0.64 | | | | | |acc_stderr,none|0.04 | | | |mmlu_astronomy | 0|alias | - astronomy | | | | | |acc,none |0.77 | | | | | |acc_stderr,none|0.03 | | | |mmlu_business_ethics | 0|alias | - business_ethics | | | | | |acc,none |0.76 | | | | | |acc_stderr,none|0.04 | | | |mmlu_clinical_knowledge | 0|alias | - clinical_knowledge | | | | | |acc,none |0.71 | | | | | |acc_stderr,none|0.03 | | | |mmlu_college_biology | 0|alias | - college_biology | | | | | |acc,none |0.82 | | | | | |acc_stderr,none|0.03 | 
| | |mmlu_college_chemistry | 0|alias | - college_chemistry | | | | | |acc,none |0.52 | | | | | |acc_stderr,none|0.05 | | | |mmlu_college_computer_science | 0|alias | - college_computer_science | | | | | |acc,none |0.56 | | | | | |acc_stderr,none|0.05 | | | |mmlu_college_mathematics | 0|alias | - college_mathematics | | | | | |acc,none |0.44 | | | | | |acc_stderr,none|0.05 | | | |mmlu_college_medicine | 0|alias | - college_medicine | | | | | |acc,none |0.72 | | | | | |acc_stderr,none|0.03 | | | |mmlu_college_physics | 0|alias | - college_physics | | | | | |acc,none |0.45 | | | | | |acc_stderr,none|0.05 | | | |mmlu_computer_security | 0|alias | - computer_security | | | | | |acc,none |0.81 | | | | | |acc_stderr,none|0.04 | | | |mmlu_conceptual_physics | 0|alias | - conceptual_physics | | | | | |acc,none |0.74 | | | | | |acc_stderr,none|0.03 | | | |mmlu_econometrics | 0|alias | - econometrics | | | | | |acc,none |0.65 | | | | | |acc_stderr,none|0.04 | | | |mmlu_electrical_engineering | 0|alias | - electrical_engineering | | | | | |acc,none |0.72 | | | | | |acc_stderr,none|0.04 | | | |mmlu_elementary_mathematics | 0|alias | - elementary_mathematics | | | | | |acc,none |0.62 | | | | | |acc_stderr,none|0.02 | | | |mmlu_formal_logic | 0|alias | - formal_logic | | | | | |acc,none |0.57 | | | | | |acc_stderr,none|0.04 | | | |mmlu_global_facts | 0|alias | - global_facts | | | | | |acc,none |0.46 | | | | | |acc_stderr,none|0.05 | | | |mmlu_high_school_biology | 0|alias | - high_school_biology | | | | | |acc,none |0.86 | | | | | |acc_stderr,none|0.02 | | | |mmlu_high_school_chemistry | 0|alias | - high_school_chemistry | | | | | |acc,none |0.67 | | | | | |acc_stderr,none|0.03 | | | |mmlu_high_school_computer_science | 0|alias | - high_school_computer_science | | | | | |acc,none |0.84 | | | | | |acc_stderr,none|0.04 | | | |mmlu_high_school_european_history | 0|alias | - high_school_european_history | | | | | |acc,none |0.82 | | | | | |acc_stderr,none|0.03 | | | |mmlu_high_school_geography | 0|alias | - high_school_geography | | | | | |acc,none |0.86 | | | | | |acc_stderr,none|0.02 | | | |mmlu_high_school_government_and_politics| 0|alias | - high_school_government_and_politics| | | | | |acc,none |0.90 | | | | | |acc_stderr,none|0.02 | | | |mmlu_high_school_macroeconomics | 0|alias | - high_school_macroeconomics | | | | | |acc,none |0.75 | | | | | |acc_stderr,none|0.02 | | | |mmlu_high_school_mathematics | 0|alias | - high_school_mathematics | | | | | |acc,none |0.43 | | | | | |acc_stderr,none|0.03 | | | |mmlu_high_school_microeconomics | 0|alias | - high_school_microeconomics | | | | | |acc,none |0.86 | | | | | |acc_stderr,none|0.02 | | | |mmlu_high_school_physics | 0|alias | - high_school_physics | | | | | |acc,none |0.45 | | | | | |acc_stderr,none|0.04 | | | |mmlu_high_school_psychology | 0|alias | - high_school_psychology | | | | | |acc,none |0.87 | | | | | |acc_stderr,none|0.01 | | | |mmlu_high_school_statistics | 0|alias | - high_school_statistics | | | | | |acc,none |0.68 | | | | | |acc_stderr,none|0.03 | | | |mmlu_high_school_us_history | 0|alias | - high_school_us_history | | | | | |acc,none |0.85 | | | | | |acc_stderr,none|0.02 | | | |mmlu_high_school_world_history | 0|alias | - high_school_world_history | | | | | |acc,none |0.85 | | | | | |acc_stderr,none|0.02 | | | |mmlu_human_aging | 0|alias | - human_aging | | | | | |acc,none |0.76 | | | | | |acc_stderr,none|0.03 | | | |mmlu_human_sexuality | 0|alias | - human_sexuality | | | | | |acc,none |0.78 | | | | | |acc_stderr,none|0.04 | | | 
|mmlu_humanities |N/A |alias | - humanities | | | | | |acc,none |0.63 | | | | | |acc_stderr,none|0.01 | | | |mmlu_international_law | 0|alias | - international_law | | | | | |acc,none |0.79 | | | | | |acc_stderr,none|0.04 | | | |mmlu_jurisprudence | 0|alias | - jurisprudence | | | | | |acc,none |0.79 | | | | | |acc_stderr,none|0.04 | | | |mmlu_logical_fallacies | 0|alias | - logical_fallacies | | | | | |acc,none |0.80 | | | | | |acc_stderr,none|0.03 | | | |mmlu_machine_learning | 0|alias | - machine_learning | | | | | |acc,none |0.52 | | | | | |acc_stderr,none|0.05 | | | |mmlu_management | 0|alias | - management | | | | | |acc,none |0.83 | | | | | |acc_stderr,none|0.04 | | | |mmlu_marketing | 0|alias | - marketing | | | | | |acc,none |0.89 | | | | | |acc_stderr,none|0.02 | | | |mmlu_medical_genetics | 0|alias | - medical_genetics | | | | | |acc,none |0.78 | | | | | |acc_stderr,none|0.04 | | | |mmlu_miscellaneous | 0|alias | - miscellaneous | | | | | |acc,none |0.85 | | | | | |acc_stderr,none|0.01 | | | |mmlu_moral_disputes | 0|alias | - moral_disputes | | | | | |acc,none |0.75 | | | | | |acc_stderr,none|0.02 | | | |mmlu_moral_scenarios | 0|alias | - moral_scenarios | | | | | |acc,none |0.48 | | | | | |acc_stderr,none|0.02 | | | |mmlu_nutrition | 0|alias | - nutrition | | | | | |acc,none |0.77 | | | | | |acc_stderr,none|0.02 | | | |mmlu_other |N/A |alias | - other | | | | | |acc,none |0.75 | | | | | |acc_stderr,none|0.01 | | | |mmlu_philosophy | 0|alias | - philosophy | | | | | |acc,none |0.78 | | | | | |acc_stderr,none|0.02 | | | |mmlu_prehistory | 0|alias | - prehistory | | | | | |acc,none |0.77 | | | | | |acc_stderr,none|0.02 | | | |mmlu_professional_accounting | 0|alias | - professional_accounting | | | | | |acc,none |0.57 | | | | | |acc_stderr,none|0.03 | | | |mmlu_professional_law | 0|alias | - professional_law | | | | | |acc,none |0.50 | | | | | |acc_stderr,none|0.01 | | | |mmlu_professional_medicine | 0|alias | - professional_medicine | | | | | |acc,none |0.71 | | | | | |acc_stderr,none|0.03 | | | |mmlu_professional_psychology | 0|alias | - professional_psychology | | | | | |acc,none |0.73 | | | | | |acc_stderr,none|0.02 | | | |mmlu_public_relations | 0|alias | - public_relations | | | | | |acc,none |0.76 | | | | | |acc_stderr,none|0.04 | | | |mmlu_security_studies | 0|alias | - security_studies | | | | | |acc,none |0.78 | | | | | |acc_stderr,none|0.03 | | | |mmlu_social_sciences |N/A |alias | - social_sciences | | | | | |acc,none |0.81 | | | | | |acc_stderr,none|0.01 | | | |mmlu_sociology | 0|alias | - sociology | | | | | |acc,none |0.86 | | | | | |acc_stderr,none|0.02 | | | |mmlu_stem |N/A |alias | - stem | | | | | |acc,none |0.65 | | | | | |acc_stderr,none|0.01 | | | |mmlu_us_foreign_policy | 0|alias | - us_foreign_policy | | | | | |acc,none |0.92 | | | | | |acc_stderr,none|0.03 | | | |mmlu_virology | 0|alias | - virology | | | | | |acc,none |0.58 | | | | | |acc_stderr,none|0.04 | | | |mmlu_world_religions | 0|alias | - world_religions | | | | | |acc,none |0.82 | | | | | |acc_stderr,none|0.03 | | | Average: 69.95% ``` ### TruthfulQA ``` | Task |Version| Metric | Value | |Stderr| |--------------|-------|-----------------------|-----------------|---|------| |truthfulqa |N/A |bleu_acc,none | 0.45| | | | | |bleu_acc_stderr,none | 0.02| | | | | |rouge1_acc,none | 0.45| | | | | |rouge1_acc_stderr,none | 0.02| | | | | |rouge2_diff,none | 0.92| | | | | |rouge2_diff_stderr,none| 1.07| | | | | |bleu_max,none | 23.77| | | | | |bleu_max_stderr,none | 0.81| | | | | |rouge2_acc,none | 0.38| | 
| | | |rouge2_acc_stderr,none | 0.02| | | | | |acc,none | 0.41| | | | | |acc_stderr,none | 0.01| | | | | |rougeL_diff,none | 1.57| | | | | |rougeL_diff_stderr,none| 0.93| | | | | |rougeL_acc,none | 0.46| | | | | |rougeL_acc_stderr,none | 0.02| | | | | |bleu_diff,none | 1.38| | | | | |bleu_diff_stderr,none | 0.75| | | | | |rouge2_max,none | 33.01| | | | | |rouge2_max_stderr,none | 1.05| | | | | |rouge1_diff,none | 1.72| | | | | |rouge1_diff_stderr,none| 0.92| | | | | |rougeL_max,none | 45.25| | | | | |rougeL_max_stderr,none | 0.92| | | | | |rouge1_max,none | 48.29| | | | | |rouge1_max_stderr,none | 0.90| | | | | |alias |truthfulqa | | | |truthfulqa_gen| 3|bleu_max,none | 23.77| | | | | |bleu_max_stderr,none | 0.81| | | | | |bleu_acc,none | 0.45| | | | | |bleu_acc_stderr,none | 0.02| | | | | |bleu_diff,none | 1.38| | | | | |bleu_diff_stderr,none | 0.75| | | | | |rouge1_max,none | 48.29| | | | | |rouge1_max_stderr,none | 0.90| | | | | |rouge1_acc,none | 0.45| | | | | |rouge1_acc_stderr,none | 0.02| | | | | |rouge1_diff,none | 1.72| | | | | |rouge1_diff_stderr,none| 0.92| | | | | |rouge2_max,none | 33.01| | | | | |rouge2_max_stderr,none | 1.05| | | | | |rouge2_acc,none | 0.38| | | | | |rouge2_acc_stderr,none | 0.02| | | | | |rouge2_diff,none | 0.92| | | | | |rouge2_diff_stderr,none| 1.07| | | | | |rougeL_max,none | 45.25| | | | | |rougeL_max_stderr,none | 0.92| | | | | |rougeL_acc,none | 0.46| | | | | |rougeL_acc_stderr,none | 0.02| | | | | |rougeL_diff,none | 1.57| | | | | |rougeL_diff_stderr,none| 0.93| | | | | |alias | - truthfulqa_gen| | | |truthfulqa_mc1| 2|acc,none | 0.33| | | | | |acc_stderr,none | 0.02| | | | | |alias | - truthfulqa_mc1| | | |truthfulqa_mc2| 2|acc,none | 0.49| | | | | |acc_stderr,none | 0.02| | | | | |alias | - truthfulqa_mc2| | | Average: 48.59% ``` ### Winogrande ``` | Task |Version| Metric | Value | |Stderr| |----------|------:|---------------|----------|---|------| |winogrande| 1|acc,none | 0.77| | | | | |acc_stderr,none| 0.01| | | | | |alias |winogrande| | | Average: 77.35% ``` ### GSM8K ``` |Task |Version| Metric |Value| |Stderr| |-----|------:|-----------------------------------|-----|---|------| |gsm8k| 3|exact_match,strict-match | 0.67| | | | | |exact_match_stderr,strict-match | 0.01| | | | | |exact_match,flexible-extract | 0.68| | | | | |exact_match_stderr,flexible-extract| 0.01| | | | | |alias |gsm8k| | | Average: 67.48% ``` **Average score**: 67.48%
{"license": "apache-2.0"}
task
[ "TRANSLATION" ]
46,037
batterydata/batteryonlybert-cased
batterydata
fill-mask
[ "transformers", "pytorch", "bert", "fill-mask", "exbert", "en", "dataset:batterypapers", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-03T19:09:24Z
2022-03-05T16:04:11+00:00
121
0
--- datasets: - batterypapers language: en license: apache-2.0 tags: - exbert --- # BatteryOnlyBERT-uncased model Pretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective. It was introduced in [this paper](paper_link) and first released in [this repository](https://github.com/ShuHuang/batterybert). This model is uncased: it does not make a difference between english and English. ## Model description BatteryOnlyBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ## Training data The BatteryOnlyBERT model was pretrained on the full text of battery papers only. The paper corpus contains a total of 400,366 battery research papers that were published from 2000 to June 2021, from the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found at [Github](https://github.com/ShuHuang/batterybert/blob/main/corpus.txt). ## Training procedure ### Preprocessing The texts are tokenized using WordPiece and a vocabulary size of 30,522. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace). - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 8 NVIDIA DGX A100 GPUs for 1,500,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=batterybert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2. 
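The 15% masking rate and the 80/10/10 replacement rule described above are the same scheme implemented by `DataCollatorForLanguageModeling` in transformers, so a pretraining-style batch can be reproduced roughly as in the sketch below (an illustration, not the authors' actual training pipeline).

```python
# Hedged sketch: reproduces the described masking rule with the standard
# transformers data collator; not the original pretraining code.
from transformers import BertTokenizer, DataCollatorForLanguageModeling

tokenizer = BertTokenizer.from_pretrained('batterydata/batteryonlybert-uncased')

# mlm_probability=0.15 gives the 15% masking rate; the collator applies the
# 80% [MASK] / 10% random token / 10% unchanged split internally.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

encoding = tokenizer(
    "Lithium-ion battery capacity fades at elevated temperatures.",
    truncation=True, max_length=512
)
batch = collator([{"input_ids": encoding["input_ids"]}])
print(batch["input_ids"])  # masked inputs
print(batch["labels"])     # -100 everywhere except the masked positions
```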
### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='batterydata/batteryonlybert-uncased') >>> unmasker("Hello I'm a [MASK] model.") ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('batterydata/batteryonlybert-uncased') model = BertModel.from_pretrained('batterydata/batteryonlybert-uncased') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('batterydata/batteryonlybert-uncased') model = TFBertModel.from_pretrained('batterydata/batteryonlybert-uncased') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results Final loss: 1.1012. ## Authors Shu Huang: `sh2009 [at] cam.ac.uk` Jacqueline Cole: `jmc61 [at] cam.ac.uk` ## Citation BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
null
Non_BioNLP
# BatteryOnlyBERT-uncased model Pretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective. It was introduced in [this paper](paper_link) and first released in [this repository](https://github.com/ShuHuang/batterybert). This model is uncased: it does not make a difference between english and English. ## Model description BatteryOnlyBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ## Training data The BatteryOnlyBERT model was pretrained on the full text of battery papers only. The paper corpus contains a total of 400,366 battery research papers that were published from 2000 to June 2021 by the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found on [Github](https://github.com/ShuHuang/batterybert/blob/main/corpus.txt). ## Training procedure ### Preprocessing The texts are tokenized using WordPiece and a vocabulary size of 30,522. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace). - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 8 NVIDIA DGX A100 GPUs for 1,500,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=batterybert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.
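As a concrete illustration of the masking procedure described under *Preprocessing* above, the sketch below reproduces the 15% / 80-10-10 corruption scheme in PyTorch. It is a minimal approximation of the pretraining data pipeline, not the exact code used to train BatteryOnlyBERT, and it simply reuses the released tokenizer.

```python
import torch
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('batterydata/batteryonlybert-uncased')

def mask_tokens(input_ids: torch.Tensor, mlm_probability: float = 0.15):
    """Apply BERT-style masking to a 1-D tensor of token ids: 15% of tokens, split 80/10/10."""
    labels = input_ids.clone()

    # Select 15% of the positions (special tokens excluded) as prediction targets.
    probability_matrix = torch.full(labels.shape, mlm_probability)
    special_tokens_mask = torch.tensor(
        tokenizer.get_special_tokens_mask(labels.tolist(), already_has_special_tokens=True),
        dtype=torch.bool,
    )
    probability_matrix.masked_fill_(special_tokens_mask, value=0.0)
    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100  # loss is only computed on the masked tokens

    # In 80% of the cases, replace the selected token by [MASK].
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    input_ids[replaced] = tokenizer.mask_token_id

    # Half of the remaining 20% get a random token; the rest are left unchanged.
    random_indices = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~replaced
    random_tokens = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)
    input_ids[random_indices] = random_tokens[random_indices]

    return input_ids, labels

encoded = tokenizer("Lithium-ion batteries degrade faster at high temperature.", return_tensors='pt')
masked_ids, labels = mask_tokens(encoded['input_ids'][0])
print(tokenizer.decode(masked_ids))
```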
### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='batterydata/batteryonlybert-uncased') >>> unmasker("Hello I'm a [MASK] model.") ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('batterydata/batteryonlybert-uncased') model = BertModel.from_pretrained('batterydata/batteryonlybert-uncased') text = "Replace me with any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('batterydata/batteryonlybert-uncased') model = TFBertModel.from_pretrained('batterydata/batteryonlybert-uncased') text = "Replace me with any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results Final loss: 1.1012. ## Authors Shu Huang: `sh2009 [at] cam.ac.uk` Jacqueline Cole: `jmc61 [at] cam.ac.uk` ## Citation BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
{"datasets": ["batterypapers"], "language": "en", "license": "apache-2.0", "tags": ["exbert"]}
task
[ "QUESTION_ANSWERING" ]
46,038
TheBloke/MistralLite-7B-AWQ
TheBloke
text-generation
[ "transformers", "safetensors", "mistral", "text-generation", "base_model:amazon/MistralLite", "base_model:quantized:amazon/MistralLite", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
2023-10-19T10:55:29Z
2023-11-09T18:16:55+00:00
18
8
--- base_model: amazon/MistralLite license: apache-2.0 model_name: MistralLite 7B inference: false model_creator: Amazon Web Services model_type: mistral prompt_template: '<|prompter|>{prompt}</s><|assistant|> ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # MistralLite 7B - AWQ - Model creator: [Amazon Web Services](https://huggingface.co/amazon) - Original model: [MistralLite 7B](https://huggingface.co/amazon/MistralLite) <!-- description start --> ## Description This repo contains AWQ model files for [Amazon Web Services's MistralLite 7B](https://huggingface.co/amazon/MistralLite). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MistralLite-7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MistralLite-7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MistralLite-7B-GGUF) * [Amazon Web Services's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/amazon/MistralLite) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Amazon ``` <|prompter|>{prompt}</s><|assistant|> ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM. Models are released as sharded safetensors files. 
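If you prefer to pre-download the sharded safetensors before using any of the clients described below (for example into a shared model cache), a minimal sketch using `huggingface_hub` is shown here; the local directory is only an example and a reasonably recent `huggingface_hub` release is assumed.

```python
from huggingface_hub import snapshot_download

# Download the AWQ weights, config and tokenizer files from the main branch.
local_dir = snapshot_download(
    repo_id="TheBloke/MistralLite-7B-AWQ",
    revision="main",
    local_dir="./models/MistralLite-7B-AWQ",  # example path, change as needed
)
print(f"Files downloaded to: {local_dir}")
```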
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/MistralLite-7B-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.15 GB | <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/MistralLite-7B-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `MistralLite-7B-AWQ`. 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter. For example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/MistralLite-7B-AWQ --quantization awq ``` - When using vLLM from Python code, again set `quantization=awq`. For example: ```python from vllm import LLM, SamplingParams prompts = [ "Tell me about AI", "Write a story about llamas", "What is 291 - 150?", "How much wood would a woodchuck chuck if a woodchuck could chuck wood?", ] prompt_template='''<|prompter|>{prompt}</s><|assistant|> ''' prompts = [prompt_template.format(prompt=prompt) for prompt in prompts] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="TheBloke/MistralLite-7B-AWQ", quantization="awq", dtype="auto") outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` <!-- README_AWQ.md-use-from-vllm end --> <!-- README_AWQ.md-use-from-tgi start --> ## Multi-user inference server: Hugging Face Text Generation Inference (TGI) Use TGI version 1.1.0 or later.
The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/MistralLite-7B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''<|prompter|>{prompt}</s><|assistant|> ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: ", response) ``` <!-- README_AWQ.md-use-from-tgi end --> <!-- README_AWQ.md-use-from-python start --> ## Inference from Python code using AutoAWQ ### Install the AutoAWQ package Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.1 or later. ```shell pip3 install autoawq ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install . ``` ### AutoAWQ example code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer model_name_or_path = "TheBloke/MistralLite-7B-AWQ" # Load tokenizer tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False) # Load model model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True, trust_remote_code=False, safetensors=True) prompt = "Tell me about AI" prompt_template=f'''<|prompter|>{prompt}</s><|assistant|> ''' print("*** Running model.generate:") token_input = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() # Generate output generation_output = model.generate( token_input, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, max_new_tokens=512 ) # Get the tokens from the output, decode them, print them token_output = generation_output[0] text_output = tokenizer.decode(token_output) print("LLM output: ", text_output) """ # Inference should be possible with transformers pipeline as well in future # But currently this is not yet supported by AutoAWQ (correct as of September 25th 2023) from transformers import pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) """ ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with: - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`. - [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later. - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later. 
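To sanity-check a local environment against the minimum versions listed above, a small illustrative sketch using only the standard library is:

```python
from importlib.metadata import PackageNotFoundError, version

# Minimum versions from the compatibility list above (text-generation-webui and TGI are not pip packages).
minimums = {"autoawq": "0.1.1", "vllm": "0.2.0"}

for package, minimum in minimums.items():
    try:
        print(f"{package}: installed {version(package)} (needs >= {minimum})")
    except PackageNotFoundError:
        print(f"{package}: not installed (needs >= {minimum})")
```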
<!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Amazon Web Services's MistralLite 7B # MistralLite Model MistralLite is a fine-tuned [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) language model, with enhanced capabilities of processing long context (up to 32K tokens). By utilizing an adapted Rotary Embedding and sliding window during fine-tuning, MistralLite is able to **perform significantly better on several long context retrieve and answering tasks**, while keeping the simple model structure of the original model. 
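To see how the adapted rotary embedding and the enlarged sliding window surface in the released checkpoint, the short sketch below prints the relevant configuration fields; the field names follow the standard `transformers` Mistral configuration, and the values should match the comparison table that follows.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("amazon/MistralLite")

# Long-context related settings of MistralLite.
print("rope_theta:", getattr(config, "rope_theta", None))
print("sliding_window:", getattr(config, "sliding_window", None))
print("max_position_embeddings:", config.max_position_embeddings)
```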
MistralLite is useful for applications such as long context line and topic retrieval, summarization, question-answering, and more. MistralLite can be deployed on a single AWS `g5.2x` instance with a SageMaker [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) endpoint, making it suitable for applications that require high performance in resource-constrained environments. You can also serve the MistralLite model directly using TGI docker containers. Also, MistralLite supports other ways of serving like [vLLM](https://github.com/vllm-project/vllm), and you can use MistralLite in Python by using the [HuggingFace transformers](https://huggingface.co/docs/transformers/index) and [FlashAttention-2](https://github.com/Dao-AILab/flash-attention) libraries. MistralLite is similar to [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1), and their similarities and differences are summarized below: |Model|Fine-tuned on long contexts| Max context length| RotaryEmbedding adaptation| Sliding Window Size| |----------|-------------:|------------:|-----------:|-----------:| | Mistral-7B-Instruct-v0.1 | up to 8K tokens | 32K | rope_theta = 10000 | 4096 | | MistralLite | up to 16K tokens | 32K | **rope_theta = 1000000** | **16384** | ## Motivation of Developing MistralLite Since the release of [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1), the model became increasingly popular because of its strong performance on a wide range of benchmarks. But most of the benchmarks are evaluated on `short context`, and not much has been investigated on its performance on long context tasks. We then evaluated `Mistral-7B-Instruct-v0.1` against benchmarks that are specifically designed to assess the capabilities of LLMs in handling longer context. Although the model was fairly competitive on contexts of less than 4096 tokens, there were some limitations in its performance on longer contexts. To improve its performance on longer contexts, we fine-tuned the Mistral 7B model and produced `MistralLite`. The model managed to `significantly boost the performance of long context handling` over Mistral-7B-Instruct-v0.1. The detailed `long context evaluation results` are as below: 1. [Topic Retrieval](https://lmsys.org/blog/2023-06-29-longchat/) |Model Name|Input length| Input length | Input length| Input length| Input length| |----------|-------------:|-------------:|------------:|-----------:|-----------:| | | 2851| 5568 |8313 | 11044 | 13780 | Mistral-7B-Instruct-v0.1 | 100% | 50% | 2% | 0% | 0% | | MistralLite | **100%** | **100%** | **100%** | **100%** | **98%** | 2. [Line Retrieval](https://lmsys.org/blog/2023-06-29-longchat/#longeval-results) |Model Name|Input length| Input length | Input length| Input length| Input length|Input length| |----------|-------------:|-------------:|------------:|-----------:|-----------:|-----------:| | | 3818| 5661 |7505 | 9354 | 11188 | 12657 | Mistral-7B-Instruct-v0.1 | **98%** | 62% | 42% | 42% | 32% | 30% | | MistralLite | **98%** | **92%** | **88%** | **76%** | **70%** | **60%** | 3.
[Pass key Retrieval](https://github.com/epfml/landmark-attention/blob/main/llama/run_test.py#L101) |Model Name|Input length| Input length | Input length| Input length| |----------|-------------:|-------------:|------------:|-----------:| | | 3264| 5396 |8329 | 10197 | | Mistral-7B-Instruct-v0.1 | **100%** | 50% | 20% | 30% | | MistralLite | **100%** | **100%** | **100%** | **100%** | 4. [Question Answering with Long Input Texts](https://nyu-mll.github.io/quality/) |Model Name| Test set Accuracy | Hard subset Accuracy| |----------|-------------:|-------------:| | Mistral-7B-Instruct-v0.1 | 44.3% | 39.7% | | MistralLite | **64.4%** | **56.2%** | ### Example test on long context of 13400 tokens - Context is from [Amazon Aurora FAQs](https://aws.amazon.com/rds/aurora/faqs/) - Question: *please tell me how does pgvector help with Generative AI and give me some examples.* - Answer from MistralLite: ```code pgvector is an open-source extension for PostgreSQL supported by Amazon Aurora PostgreSQL-Compatible Edition. You can use pgvector to store, search, index, and query billions of embeddings that are generated from machine learning (ML) and artificial intelligence (AI) models in your database, such as those from Amazon Bedrock (limited preview) or Amazon SageMaker. A vector embedding is a numerical representation that represents the semantic meaning of content such as text, images, and video. With pgvector, you can query embeddings in your Aurora PostgreSQL database to perform efficient semantic similarity searches of these data types, represented as vectors, combined with other tabular data in Aurora. This enables the use of generative AI and other AI/ML systems for new types of applications such as personalized recommendations based on similar text descriptions or images, candidate match based on interview notes, customer service next best action recommendations based on successful transcripts or chat session dialogs, and more. 
``` ## Model Details - **Developed by:** [AWS Contributors](https://github.com/orgs/aws-samples/teams/aws-prototype-ml-apac) - **Model type:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) - **Language:** English - **Finetuned from weights:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) - **Finetuned on data:** - [SLidingEncoder and Decoder (SLED)](https://huggingface.co/datasets/tau/sled) - [(Long) Natural Questions (NQ)](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections#multi-passage-qa-from-natural-questions) - [OpenAssistant Conversations Dataset (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1) - **Supported Serving Framework:** - [Text-Generation-Inference 1.1.0](https://github.com/huggingface/text-generation-inference/tree/v1.1.0) - [vLLM](https://github.com/vllm-project/vllm) - [HuggingFace transformers](https://huggingface.co/docs/transformers/index) - [HuggingFace Text Generation Inference (TGI) container on SageMaker](https://github.com/awslabs/llm-hosting-container) - **Model License:** Apache 2.0 - **Contact:** [GitHub issues](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/issues) - **Inference Code** [Github Repo](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/) ## How to Use MistralLite from Python Code (HuggingFace transformers) ## **Important** - For an end-to-end example Jupyter notebook, please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/huggingface-transformers/example_usage.ipynb). ### Install the necessary packages Requires: [transformers](https://pypi.org/project/transformers/) 4.34.0 or later, [flash-attn](https://pypi.org/project/flash-attn/) 2.3.1.post1 or later, and [accelerate](https://pypi.org/project/accelerate/) 0.23.0 or later. ```shell pip install transformers==4.34.0 pip install flash-attn==2.3.1.post1 --no-build-isolation pip install accelerate==0.23.0 ``` ### You can then try the following example code ```python from transformers import AutoModelForCausalLM, AutoTokenizer import transformers import torch model_id = "amazon/MistralLite" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, use_flash_attention_2=True, device_map="auto",) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, ) prompt = "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>" sequences = pipeline( prompt, max_new_tokens=400, do_sample=False, return_full_text=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, ) for seq in sequences: print(f"{seq['generated_text']}") ``` **Important** - Use the prompt template below for MistralLite: ``` <|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|> ``` ## How to Serve MistralLite on TGI ## **Important:** - For an end-to-end example Jupyter notebook using the native TGI container, please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/tgi/example_usage.ipynb). - If the **input context length is greater than 12K tokens**, it is recommended using a custom TGI container, please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/tgi-custom/example_usage.ipynb). 
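Whichever serving option you pick in the sections below, the prompt format stays the same. A tiny helper like the following (the function name is only illustrative) keeps it consistent across clients:

```python
def build_mistrallite_prompt(user_message: str) -> str:
    """Wrap a user message in the MistralLite prompt format."""
    return f"<|prompter|>{user_message}</s><|assistant|>"

# Example
print(build_mistrallite_prompt("What are the main challenges to support a long context for LLM?"))
```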
### Start TGI server ### Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell docker run -d --gpus all --shm-size 1g -p 443:80 -v $(pwd)/models:/data ghcr.io/huggingface/text-generation-inference:1.1.0 \ --model-id amazon/MistralLite \ --max-input-length 16000 \ --max-total-tokens 16384 \ --max-batch-prefill-tokens 16384 \ --trust-remote-code ``` ### Perform Inference ### Example Python code for inference with TGI (requires `text_generation` 0.6.1 or later): ```shell pip install text_generation==0.6.1 ``` ```python from text_generation import Client SERVER_PORT = 443 SERVER_HOST = "localhost" SERVER_URL = f"{SERVER_HOST}:{SERVER_PORT}" tgi_client = Client(f"http://{SERVER_URL}", timeout=60) def invoke_tgi(prompt, random_seed=1, max_new_tokens=400, print_stream=True, assist_role=True): if (assist_role): prompt = f"<|prompter|>{prompt}</s><|assistant|>" output = "" for response in tgi_client.generate_stream( prompt, do_sample=False, max_new_tokens=max_new_tokens, return_full_text=False, #temperature=None, #truncate=None, #seed=random_seed, #typical_p=0.2, ): if hasattr(response, "token"): if not response.token.special: snippet = response.token.text output += snippet if (print_stream): print(snippet, end='', flush=True) return output prompt = "What are the main challenges to support a long context for LLM?" result = invoke_tgi(prompt) ``` **Important** - When using MistralLite for inference for the first time, it may require a brief 'warm-up' period that can take 10s of seconds. However, subsequent inferences should be faster and return results in a more timely manner. This warm-up period is normal and should not affect the overall performance of the system once the initialisation period has been completed. ## How to Deploy MistralLite on Amazon SageMaker ## **Important:** - For an end-to-end example Jupyter notebook using the SageMaker built-in container, please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/sagemaker-tgi/example_usage.ipynb). - If the **input context length is greater than 12K tokens**, it is recommended using a custom docker container, please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/sagemaker-tgi-custom/example_usage.ipynb). ### Install the necessary packages Requires: [sagemaker](https://pypi.org/project/sagemaker/) 2.192.1 or later. ```shell pip install sagemaker==2.192.1 ``` ### Deploy the Model as A SageMaker Endpoint ### To deploy MistralLite on a SageMaker endpoint, please follow the example code as below. 
```python import sagemaker from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri import time sagemaker_session = sagemaker.Session() region = sagemaker_session.boto_region_name role = sagemaker.get_execution_role() image_uri = get_huggingface_llm_image_uri( backend="huggingface", # or lmi region=region, version="1.1.0" ) model_name = "MistralLite-" + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime()) hub = { 'HF_MODEL_ID':'amazon/MistralLite', 'HF_TASK':'text-generation', 'SM_NUM_GPUS':'1', "MAX_INPUT_LENGTH": '16000', "MAX_TOTAL_TOKENS": '16384', "MAX_BATCH_PREFILL_TOKENS": '16384', "MAX_BATCH_TOTAL_TOKENS": '16384', } model = HuggingFaceModel( name=model_name, env=hub, role=role, image_uri=image_uri ) predictor = model.deploy( initial_instance_count=1, instance_type="ml.g5.2xlarge", endpoint_name=model_name, ) ``` ### Perform Inference ### To call the endpoint, please follow the example code as below: ```python input_data = { "inputs": "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>", "parameters": { "do_sample": False, "max_new_tokens": 400, "return_full_text": False, #"typical_p": 0.2, #"temperature":None, #"truncate":None, #"seed": 1, } } result = predictor.predict(input_data)[0]["generated_text"] print(result) ``` or via [boto3](https://pypi.org/project/boto3/), and the example code is shown as below: ```python import boto3 import json def call_endpoint(client, prompt, endpoint_name, paramters): client = boto3.client("sagemaker-runtime") payload = {"inputs": prompt, "parameters": parameters} response = client.invoke_endpoint(EndpointName=endpoint_name, Body=json.dumps(payload), ContentType="application/json") output = json.loads(response["Body"].read().decode()) result = output[0]["generated_text"] return result client = boto3.client("sagemaker-runtime") parameters = { "do_sample": False, "max_new_tokens": 400, "return_full_text": False, #"typical_p": 0.2, #"temperature":None, #"truncate":None, #"seed": 1, } endpoint_name = predictor.endpoint_name prompt = "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>" result = call_endpoint(client, prompt, endpoint_name, parameters) print(result) ``` ## How to Serve MistralLite on vLLM ## Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). **Important** - For an end-to-end example Jupyter notebook, please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/vllm/example_usage.ipynb). ### Using vLLM as a server ### When using vLLM as a server, pass the --model amazon/MistralLite parameter, for example: ```shell python3 -m vllm.entrypoints.api_server --model amazon/MistralLite ``` ### Using vLLM in Python Code ### When using vLLM from Python code, Please see the example code as below: ```python from vllm import LLM, SamplingParams prompts = [ "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>", ] sampling_params = SamplingParams(temperature=0, max_tokens=100) llm = LLM(model="amazon/MistralLite",) outputs = llm.generate(prompts, sampling_params) # Print the outputs. 
for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` ## Limitations ## Before using the MistralLite model, it is important to perform your own independent assessment, and take measures to ensure that your use would comply with your own specific quality control practices and standards, and with the local rules, laws, regulations, licenses and terms that apply to you and your content.
null
Non_BioNLP
<!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # MistralLite 7B - AWQ - Model creator: [Amazon Web Services](https://huggingface.co/amazon) - Original model: [MistralLite 7B](https://huggingface.co/amazon/MistralLite) <!-- description start --> ## Description This repo contains AWQ model files for [Amazon Web Services's MistralLite 7B](https://huggingface.co/amazon/MistralLite). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MistralLite-7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MistralLite-7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MistralLite-7B-GGUF) * [Amazon Web Services's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/amazon/MistralLite) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Amazon ``` <|prompter|>{prompt}</s><|assistant|> ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM. Models are released as sharded safetensors files. 
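If you prefer to pre-download the sharded safetensors before using any of the clients described below (for example into a shared model cache), a minimal sketch using `huggingface_hub` is shown here; the local directory is only an example and a reasonably recent `huggingface_hub` release is assumed.

```python
from huggingface_hub import snapshot_download

# Download the AWQ weights, config and tokenizer files from the main branch.
local_dir = snapshot_download(
    repo_id="TheBloke/MistralLite-7B-AWQ",
    revision="main",
    local_dir="./models/MistralLite-7B-AWQ",  # example path, change as needed
)
print(f"Files downloaded to: {local_dir}")
```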
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/MistralLite-7B-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.15 GB | <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/MistralLite-7B-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `MistralLite-7B-AWQ`. 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter. For example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/MistralLite-7B-AWQ --quantization awq ``` - When using vLLM from Python code, again set `quantization=awq`. For example: ```python from vllm import LLM, SamplingParams prompts = [ "Tell me about AI", "Write a story about llamas", "What is 291 - 150?", "How much wood would a woodchuck chuck if a woodchuck could chuck wood?", ] prompt_template='''<|prompter|>{prompt}</s><|assistant|> ''' prompts = [prompt_template.format(prompt=prompt) for prompt in prompts] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="TheBloke/MistralLite-7B-AWQ", quantization="awq", dtype="auto") outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` <!-- README_AWQ.md-use-from-vllm end --> <!-- README_AWQ.md-use-from-tgi start --> ## Multi-user inference server: Hugging Face Text Generation Inference (TGI) Use TGI version 1.1.0 or later.
The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/MistralLite-7B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''<|prompter|>{prompt}</s><|assistant|> ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: ", response) ``` <!-- README_AWQ.md-use-from-tgi end --> <!-- README_AWQ.md-use-from-python start --> ## Inference from Python code using AutoAWQ ### Install the AutoAWQ package Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.1 or later. ```shell pip3 install autoawq ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install . ``` ### AutoAWQ example code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer model_name_or_path = "TheBloke/MistralLite-7B-AWQ" # Load tokenizer tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False) # Load model model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True, trust_remote_code=False, safetensors=True) prompt = "Tell me about AI" prompt_template=f'''<|prompter|>{prompt}</s><|assistant|> ''' print("*** Running model.generate:") token_input = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() # Generate output generation_output = model.generate( token_input, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, max_new_tokens=512 ) # Get the tokens from the output, decode them, print them token_output = generation_output[0] text_output = tokenizer.decode(token_output) print("LLM output: ", text_output) """ # Inference should be possible with transformers pipeline as well in future # But currently this is not yet supported by AutoAWQ (correct as of September 25th 2023) from transformers import pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) """ ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with: - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`. - [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later. - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later. 
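To sanity-check a local environment against the minimum versions listed above, a small illustrative sketch using only the standard library is:

```python
from importlib.metadata import PackageNotFoundError, version

# Minimum versions from the compatibility list above (text-generation-webui and TGI are not pip packages).
minimums = {"autoawq": "0.1.1", "vllm": "0.2.0"}

for package, minimum in minimums.items():
    try:
        print(f"{package}: installed {version(package)} (needs >= {minimum})")
    except PackageNotFoundError:
        print(f"{package}: not installed (needs >= {minimum})")
```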
<!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Amazon Web Services's MistralLite 7B # MistralLite Model MistralLite is a fine-tuned [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) language model, with enhanced capabilities of processing long context (up to 32K tokens). By utilizing an adapted Rotary Embedding and sliding window during fine-tuning, MistralLite is able to **perform significantly better on several long context retrieve and answering tasks**, while keeping the simple model structure of the original model. 
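To see how the adapted rotary embedding and the enlarged sliding window surface in the released checkpoint, the short sketch below prints the relevant configuration fields; the field names follow the standard `transformers` Mistral configuration, and the values should match the comparison table that follows.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("amazon/MistralLite")

# Long-context related settings of MistralLite.
print("rope_theta:", getattr(config, "rope_theta", None))
print("sliding_window:", getattr(config, "sliding_window", None))
print("max_position_embeddings:", config.max_position_embeddings)
```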
MistralLite is useful for applications such as long context line and topic retrieval, summarization, question-answering, and more. MistralLite can be deployed on a single AWS `g5.2x` instance with a SageMaker [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) endpoint, making it suitable for applications that require high performance in resource-constrained environments. You can also serve the MistralLite model directly using TGI docker containers. Also, MistralLite supports other ways of serving like [vLLM](https://github.com/vllm-project/vllm), and you can use MistralLite in Python by using the [HuggingFace transformers](https://huggingface.co/docs/transformers/index) and [FlashAttention-2](https://github.com/Dao-AILab/flash-attention) libraries. MistralLite is similar to [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1), and their similarities and differences are summarized below: |Model|Fine-tuned on long contexts| Max context length| RotaryEmbedding adaptation| Sliding Window Size| |----------|-------------:|------------:|-----------:|-----------:| | Mistral-7B-Instruct-v0.1 | up to 8K tokens | 32K | rope_theta = 10000 | 4096 | | MistralLite | up to 16K tokens | 32K | **rope_theta = 1000000** | **16384** | ## Motivation of Developing MistralLite Since the release of [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1), the model became increasingly popular because of its strong performance on a wide range of benchmarks. But most of the benchmarks are evaluated on `short context`, and not much has been investigated on its performance on long context tasks. We then evaluated `Mistral-7B-Instruct-v0.1` against benchmarks that are specifically designed to assess the capabilities of LLMs in handling longer context. Although the model was fairly competitive on contexts of less than 4096 tokens, there were some limitations in its performance on longer contexts. To improve its performance on longer contexts, we fine-tuned the Mistral 7B model and produced `MistralLite`. The model managed to `significantly boost the performance of long context handling` over Mistral-7B-Instruct-v0.1. The detailed `long context evaluation results` are as below: 1. [Topic Retrieval](https://lmsys.org/blog/2023-06-29-longchat/) |Model Name|Input length| Input length | Input length| Input length| Input length| |----------|-------------:|-------------:|------------:|-----------:|-----------:| | | 2851| 5568 |8313 | 11044 | 13780 | Mistral-7B-Instruct-v0.1 | 100% | 50% | 2% | 0% | 0% | | MistralLite | **100%** | **100%** | **100%** | **100%** | **98%** | 2. [Line Retrieval](https://lmsys.org/blog/2023-06-29-longchat/#longeval-results) |Model Name|Input length| Input length | Input length| Input length| Input length|Input length| |----------|-------------:|-------------:|------------:|-----------:|-----------:|-----------:| | | 3818| 5661 |7505 | 9354 | 11188 | 12657 | Mistral-7B-Instruct-v0.1 | **98%** | 62% | 42% | 42% | 32% | 30% | | MistralLite | **98%** | **92%** | **88%** | **76%** | **70%** | **60%** | 3.
[Pass key Retrieval](https://github.com/epfml/landmark-attention/blob/main/llama/run_test.py#L101) |Model Name|Input length| Input length | Input length| Input length| |----------|-------------:|-------------:|------------:|-----------:| | | 3264| 5396 |8329 | 10197 | | Mistral-7B-Instruct-v0.1 | **100%** | 50% | 20% | 30% | | MistralLite | **100%** | **100%** | **100%** | **100%** | 4. [Question Answering with Long Input Texts](https://nyu-mll.github.io/quality/) |Model Name| Test set Accuracy | Hard subset Accuracy| |----------|-------------:|-------------:| | Mistral-7B-Instruct-v0.1 | 44.3% | 39.7% | | MistralLite | **64.4%** | **56.2%** | ### Example test on long context of 13400 tokens - Context is from [Amazon Aurora FAQs](https://aws.amazon.com/rds/aurora/faqs/) - Question: *please tell me how does pgvector help with Generative AI and give me some examples.* - Answer from MistralLite: ```code pgvector is an open-source extension for PostgreSQL supported by Amazon Aurora PostgreSQL-Compatible Edition. You can use pgvector to store, search, index, and query billions of embeddings that are generated from machine learning (ML) and artificial intelligence (AI) models in your database, such as those from Amazon Bedrock (limited preview) or Amazon SageMaker. A vector embedding is a numerical representation that represents the semantic meaning of content such as text, images, and video. With pgvector, you can query embeddings in your Aurora PostgreSQL database to perform efficient semantic similarity searches of these data types, represented as vectors, combined with other tabular data in Aurora. This enables the use of generative AI and other AI/ML systems for new types of applications such as personalized recommendations based on similar text descriptions or images, candidate match based on interview notes, customer service next best action recommendations based on successful transcripts or chat session dialogs, and more. 
``` ## Model Details - **Developed by:** [AWS Contributors](https://github.com/orgs/aws-samples/teams/aws-prototype-ml-apac) - **Model type:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) - **Language:** English - **Finetuned from weights:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) - **Finetuned on data:** - [SLidingEncoder and Decoder (SLED)](https://huggingface.co/datasets/tau/sled) - [(Long) Natural Questions (NQ)](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections#multi-passage-qa-from-natural-questions) - [OpenAssistant Conversations Dataset (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1) - **Supported Serving Framework:** - [Text-Generation-Inference 1.1.0](https://github.com/huggingface/text-generation-inference/tree/v1.1.0) - [vLLM](https://github.com/vllm-project/vllm) - [HuggingFace transformers](https://huggingface.co/docs/transformers/index) - [HuggingFace Text Generation Inference (TGI) container on SageMaker](https://github.com/awslabs/llm-hosting-container) - **Model License:** Apache 2.0 - **Contact:** [GitHub issues](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/issues) - **Inference Code** [Github Repo](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/) ## How to Use MistralLite from Python Code (HuggingFace transformers) ## **Important** - For an end-to-end example Jupyter notebook, please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/huggingface-transformers/example_usage.ipynb). ### Install the necessary packages Requires: [transformers](https://pypi.org/project/transformers/) 4.34.0 or later, [flash-attn](https://pypi.org/project/flash-attn/) 2.3.1.post1 or later, and [accelerate](https://pypi.org/project/accelerate/) 0.23.0 or later. ```shell pip install transformers==4.34.0 pip install flash-attn==2.3.1.post1 --no-build-isolation pip install accelerate==0.23.0 ``` ### You can then try the following example code ```python from transformers import AutoModelForCausalLM, AutoTokenizer import transformers import torch model_id = "amazon/MistralLite" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, use_flash_attention_2=True, device_map="auto",) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, ) prompt = "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>" sequences = pipeline( prompt, max_new_tokens=400, do_sample=False, return_full_text=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, ) for seq in sequences: print(f"{seq['generated_text']}") ``` **Important** - Use the prompt template below for MistralLite: ``` <|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|> ``` ## How to Serve MistralLite on TGI ## **Important:** - For an end-to-end example Jupyter notebook using the native TGI container, please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/tgi/example_usage.ipynb). - If the **input context length is greater than 12K tokens**, it is recommended using a custom TGI container, please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/tgi-custom/example_usage.ipynb). 
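Whichever serving option you pick in the sections below, the prompt format stays the same. A tiny helper like the following (the function name is only illustrative) keeps it consistent across clients:

```python
def build_mistrallite_prompt(user_message: str) -> str:
    """Wrap a user message in the MistralLite prompt format."""
    return f"<|prompter|>{user_message}</s><|assistant|>"

# Example
print(build_mistrallite_prompt("What are the main challenges to support a long context for LLM?"))
```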
### Start TGI server ### Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell docker run -d --gpus all --shm-size 1g -p 443:80 -v $(pwd)/models:/data ghcr.io/huggingface/text-generation-inference:1.1.0 \ --model-id amazon/MistralLite \ --max-input-length 16000 \ --max-total-tokens 16384 \ --max-batch-prefill-tokens 16384 \ --trust-remote-code ``` ### Perform Inference ### Example Python code for inference with TGI (requires `text_generation` 0.6.1 or later): ```shell pip install text_generation==0.6.1 ``` ```python from text_generation import Client SERVER_PORT = 443 SERVER_HOST = "localhost" SERVER_URL = f"{SERVER_HOST}:{SERVER_PORT}" tgi_client = Client(f"http://{SERVER_URL}", timeout=60) def invoke_tgi(prompt, random_seed=1, max_new_tokens=400, print_stream=True, assist_role=True): if (assist_role): prompt = f"<|prompter|>{prompt}</s><|assistant|>" output = "" for response in tgi_client.generate_stream( prompt, do_sample=False, max_new_tokens=max_new_tokens, return_full_text=False, #temperature=None, #truncate=None, #seed=random_seed, #typical_p=0.2, ): if hasattr(response, "token"): if not response.token.special: snippet = response.token.text output += snippet if (print_stream): print(snippet, end='', flush=True) return output prompt = "What are the main challenges to support a long context for LLM?" result = invoke_tgi(prompt) ``` **Important** - When using MistralLite for inference for the first time, it may require a brief 'warm-up' period that can take 10s of seconds. However, subsequent inferences should be faster and return results in a more timely manner. This warm-up period is normal and should not affect the overall performance of the system once the initialisation period has been completed. ## How to Deploy MistralLite on Amazon SageMaker ## **Important:** - For an end-to-end example Jupyter notebook using the SageMaker built-in container, please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/sagemaker-tgi/example_usage.ipynb). - If the **input context length is greater than 12K tokens**, it is recommended using a custom docker container, please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/sagemaker-tgi-custom/example_usage.ipynb). ### Install the necessary packages Requires: [sagemaker](https://pypi.org/project/sagemaker/) 2.192.1 or later. ```shell pip install sagemaker==2.192.1 ``` ### Deploy the Model as A SageMaker Endpoint ### To deploy MistralLite on a SageMaker endpoint, please follow the example code as below. 
```python
import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri
import time

sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_region_name
role = sagemaker.get_execution_role()

image_uri = get_huggingface_llm_image_uri(
  backend="huggingface", # or lmi
  region=region,
  version="1.1.0"
)

model_name = "MistralLite-" + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())

hub = {
    'HF_MODEL_ID':'amazon/MistralLite',
    'HF_TASK':'text-generation',
    'SM_NUM_GPUS':'1',
    "MAX_INPUT_LENGTH": '16000',
    "MAX_TOTAL_TOKENS": '16384',
    "MAX_BATCH_PREFILL_TOKENS": '16384',
    "MAX_BATCH_TOTAL_TOKENS": '16384',
}

model = HuggingFaceModel(
    name=model_name,
    env=hub,
    role=role,
    image_uri=image_uri
)
predictor = model.deploy(
  initial_instance_count=1,
  instance_type="ml.g5.2xlarge",
  endpoint_name=model_name,
)
```

### Perform Inference ###
To call the endpoint, use the example code below:

```python
input_data = {
  "inputs": "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>",
  "parameters": {
    "do_sample": False,
    "max_new_tokens": 400,
    "return_full_text": False,
    #"typical_p": 0.2,
    #"temperature":None,
    #"truncate":None,
    #"seed": 1,
  }
}
result = predictor.predict(input_data)[0]["generated_text"]
print(result)
```

or via [boto3](https://pypi.org/project/boto3/), as shown below:

```python
import boto3
import json

def call_endpoint(client, prompt, endpoint_name, parameters):
    # Use the SageMaker runtime client that is passed in.
    payload = {"inputs": prompt, "parameters": parameters}
    response = client.invoke_endpoint(EndpointName=endpoint_name,
                                      Body=json.dumps(payload),
                                      ContentType="application/json")
    output = json.loads(response["Body"].read().decode())
    result = output[0]["generated_text"]
    return result

client = boto3.client("sagemaker-runtime")
parameters = {
    "do_sample": False,
    "max_new_tokens": 400,
    "return_full_text": False,
    #"typical_p": 0.2,
    #"temperature":None,
    #"truncate":None,
    #"seed": 1,
}
endpoint_name = predictor.endpoint_name
prompt = "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>"
result = call_endpoint(client, prompt, endpoint_name, parameters)
print(result)
```

## How to Serve MistralLite on vLLM ##
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

**Important** - For an end-to-end example Jupyter notebook, please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/vllm/example_usage.ipynb).

### Using vLLM as a server ###
When using vLLM as a server, pass the `--model amazon/MistralLite` parameter, for example:
```shell
python3 -m vllm.entrypoints.api_server --model amazon/MistralLite
```

### Using vLLM in Python Code ###
When using vLLM from Python code, please see the example code below:

```python
from vllm import LLM, SamplingParams

prompts = [
   "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>",
]
sampling_params = SamplingParams(temperature=0, max_tokens=100)

llm = LLM(model="amazon/MistralLite",)

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```

## Limitations ##
Before using the MistralLite model, it is important to perform your own independent assessment and to take measures to ensure that your use complies with your own quality control practices and standards, and with the local rules, laws, regulations, licenses and terms that apply to you and your content.
{"base_model": "amazon/MistralLite", "license": "apache-2.0", "model_name": "MistralLite 7B", "inference": false, "model_creator": "Amazon Web Services", "model_type": "mistral", "prompt_template": "<|prompter|>{prompt}</s><|assistant|>\n", "quantized_by": "TheBloke"}
task
[ "QUESTION_ANSWERING", "SEMANTIC_SIMILARITY", "SUMMARIZATION" ]
46,039
metamath/distilbert-base-uncased-distilled-optim-clinc
metamath
text-classification
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-01-14T15:23:32Z
2024-01-14T15:27:25+00:00
7
0
--- base_model: distilbert-base-uncased datasets: - clinc_oos license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-distilled-optim-clinc results: - task: type: text-classification name: Text Classification dataset: name: clinc_oos type: clinc_oos config: plus split: validation args: plus metrics: - type: accuracy value: 0.9448387096774193 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-optim-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.3314 - Accuracy: 0.9448 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 318 | 2.1746 | 0.7242 | | 2.5724 | 2.0 | 636 | 1.1166 | 0.8623 | | 2.5724 | 3.0 | 954 | 0.6319 | 0.9155 | | 0.9951 | 4.0 | 1272 | 0.4582 | 0.9306 | | 0.4397 | 5.0 | 1590 | 0.3865 | 0.9394 | | 0.4397 | 6.0 | 1908 | 0.3583 | 0.9419 | | 0.2978 | 7.0 | 2226 | 0.3445 | 0.9429 | | 0.251 | 8.0 | 2544 | 0.3394 | 0.9426 | | 0.251 | 9.0 | 2862 | 0.3334 | 0.9445 | | 0.233 | 10.0 | 3180 | 0.3314 | 0.9448 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.1+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-optim-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.3314 - Accuracy: 0.9448 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 318 | 2.1746 | 0.7242 | | 2.5724 | 2.0 | 636 | 1.1166 | 0.8623 | | 2.5724 | 3.0 | 954 | 0.6319 | 0.9155 | | 0.9951 | 4.0 | 1272 | 0.4582 | 0.9306 | | 0.4397 | 5.0 | 1590 | 0.3865 | 0.9394 | | 0.4397 | 6.0 | 1908 | 0.3583 | 0.9419 | | 0.2978 | 7.0 | 2226 | 0.3445 | 0.9429 | | 0.251 | 8.0 | 2544 | 0.3394 | 0.9426 | | 0.251 | 9.0 | 2862 | 0.3334 | 0.9445 | | 0.233 | 10.0 | 3180 | 0.3314 | 0.9448 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.1+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
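For readers who want to try the checkpoint, a minimal inference sketch (not part of the original card) using the standard `transformers` text-classification pipeline would look like this; the example utterance is ours:

```python
from transformers import pipeline

# Minimal usage sketch; the model id comes from this card, the query is an arbitrary example.
classifier = pipeline(
    "text-classification",
    model="metamath/distilbert-base-uncased-distilled-optim-clinc",
)

print(classifier("how do i reset the password for my online banking account?"))
# Expected output shape: [{'label': '<clinc_oos intent>', 'score': <float>}]
```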
{"base_model": "distilbert-base-uncased", "datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-distilled-optim-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "config": "plus", "split": "validation", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9448387096774193, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,040
Helsinki-NLP/opus-mt-vi-it
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "vi", "it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-08-16T12:08:37+00:00
69
0
--- language: - vi - it license: apache-2.0 tags: - translation --- ### vie-ita * source group: Vietnamese * target group: Italian * OPUS readme: [vie-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-ita/README.md) * model: transformer-align * source language(s): vie * target language(s): ita * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.vie.ita | 31.2 | 0.548 | ### System Info: - hf_name: vie-ita - source_languages: vie - target_languages: ita - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-ita/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['vi', 'it'] - src_constituents: {'vie', 'vie_Hani'} - tgt_constituents: {'ita'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.test.txt - src_alpha3: vie - tgt_alpha3: ita - short_pair: vi-it - chrF2_score: 0.5479999999999999 - bleu: 31.2 - brevity_penalty: 0.932 - ref_len: 1774.0 - src_name: Vietnamese - tgt_name: Italian - train_date: 2020-06-17 - src_alpha2: vi - tgt_alpha2: it - prefer_old: False - long_pair: vie-ita - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
null
Non_BioNLP
### vie-ita * source group: Vietnamese * target group: Italian * OPUS readme: [vie-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-ita/README.md) * model: transformer-align * source language(s): vie * target language(s): ita * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.vie.ita | 31.2 | 0.548 | ### System Info: - hf_name: vie-ita - source_languages: vie - target_languages: ita - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-ita/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['vi', 'it'] - src_constituents: {'vie', 'vie_Hani'} - tgt_constituents: {'ita'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.test.txt - src_alpha3: vie - tgt_alpha3: ita - short_pair: vi-it - chrF2_score: 0.5479999999999999 - bleu: 31.2 - brevity_penalty: 0.932 - ref_len: 1774.0 - src_name: Vietnamese - tgt_name: Italian - train_date: 2020-06-17 - src_alpha2: vi - tgt_alpha2: it - prefer_old: False - long_pair: vie-ita - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
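The card does not include a usage snippet; a minimal sketch with the standard MarianMT classes (the Vietnamese input sentence is just an example) would be:

```python
from transformers import MarianMTModel, MarianTokenizer

# Minimal Vietnamese -> Italian translation sketch using the standard MarianMT API.
model_name = "Helsinki-NLP/opus-mt-vi-it"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Hôm nay trời đẹp."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```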
{"language": ["vi", "it"], "license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
46,041
nizamudma/bart_cnn_auto
nizamudma
text2text-generation
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain", "unk", "dataset:nizamudma/autotrain-data-text1", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-06-27T21:36:09Z
2022-06-29T14:15:25+00:00
0
0
--- datasets: - nizamudma/autotrain-data-text1 language: unk tags: - a - u - t - o - r - i - n widget: - text: I love AutoTrain 🤗 co2_eq_emissions: 4581.794954519826 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 1046236000 - CO2 Emissions (in grams): 4581.794954519826 ## Validation Metrics - Loss: 1.4225560426712036 - Rouge1: 42.5931 - Rouge2: 20.0106 - RougeL: 29.681 - RougeLsum: 39.8097 - Gen Len: 84.9844 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/nizamudma/autotrain-text1-1046236000 ```
null
Non_BioNLP
# Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 1046236000 - CO2 Emissions (in grams): 4581.794954519826 ## Validation Metrics - Loss: 1.4225560426712036 - Rouge1: 42.5931 - Rouge2: 20.0106 - RougeL: 29.681 - RougeLsum: 39.8097 - Gen Len: 84.9844 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/nizamudma/autotrain-text1-1046236000 ```
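The same Inference API call can be made from Python; the sketch below simply mirrors the cURL example above (the API key is a placeholder):

```python
import requests

# Python equivalent of the cURL example above; replace the placeholder key with your own.
API_URL = "https://api-inference.huggingface.co/nizamudma/autotrain-text1-1046236000"
HEADERS = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

def summarize(text: str):
    response = requests.post(API_URL, headers=HEADERS, json={"inputs": text})
    return response.json()

print(summarize("I love AutoTrain"))
```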
{"datasets": ["nizamudma/autotrain-data-text1"], "language": "unk", "tags": ["a", "u", "t", "o", "r", "i", "n"], "widget": [{"text": "I love AutoTrain 🤗"}], "co2_eq_emissions": 4581.794954519826}
task
[ "SUMMARIZATION" ]
46,042
martimfasantos/TinyLlama-1.1B-MT-SLiC-DPO
martimfasantos
translation
[ "safetensors", "llama", "xcomet_xl_xxl", "generated_from_trainer", "translation", "dataset:sardinelab/MT-pref", "base_model:martimfasantos/TinyLlama-1.1B-MT-SFT", "base_model:finetune:martimfasantos/TinyLlama-1.1B-MT-SFT", "license:apache-2.0", "region:us" ]
2024-08-19T08:05:48Z
2025-01-02T16:47:51+00:00
10
0
--- base_model: martimfasantos/TinyLlama-1.1B-MT-SFT datasets: - sardinelab/MT-pref license: apache-2.0 pipeline_tag: translation tags: - xcomet_xl_xxl - generated_from_trainer model-index: - name: TinyLlama-1.1B-MT-SLiC-DPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TinyLlama-1.1B-MT-SLiC-DPO This model is a fine-tuned version of [martimfasantos/TinyLlama-1.1B-MT-SFT](https://huggingface.co/martimfasantos/TinyLlama-1.1B-MT-SFT) on the sardinelab/MT-pref dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-07 - train_batch_size: 1 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 32 - total_train_batch_size: 64 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.43.3 - Pytorch 2.3.1+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TinyLlama-1.1B-MT-SLiC-DPO This model is a fine-tuned version of [martimfasantos/TinyLlama-1.1B-MT-SFT](https://huggingface.co/martimfasantos/TinyLlama-1.1B-MT-SFT) on the sardinelab/MT-pref dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-07 - train_batch_size: 1 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 32 - total_train_batch_size: 64 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.43.3 - Pytorch 2.3.1+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
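Since the card does not document a prompt format, the generation sketch below is generic: it loads the checkpoint with the standard causal-LM classes and uses a plain instruction-style prompt as a placeholder, not the template used during SFT/DPO training:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Generic generation sketch; the prompt below is a placeholder, not the documented training template.
model_id = "martimfasantos/TinyLlama-1.1B-MT-SLiC-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Translate the following English source text to German.\nEnglish: The weather is nice today.\nGerman:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```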
{"base_model": "martimfasantos/TinyLlama-1.1B-MT-SFT", "datasets": ["sardinelab/MT-pref"], "license": "apache-2.0", "pipeline_tag": "translation", "tags": ["xcomet_xl_xxl", "generated_from_trainer"], "model-index": [{"name": "TinyLlama-1.1B-MT-SLiC-DPO", "results": []}]}
task
[ "TRANSLATION" ]
46,043
fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-12907987
fine-tuned
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "custom_code", "en", "dataset:fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-12907987", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-05-29T01:36:10Z
2024-05-29T01:36:26+00:00
7
0
--- datasets: - fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-12907987 - allenai/c4 language: - en - en license: apache-2.0 pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-12907987', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
null
Non_BioNLP
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-12907987', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
{"datasets": ["fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-12907987", "allenai/c4"], "language": ["en", "en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"]}
task
[ "TEXT_CLASSIFICATION" ]
46,044
mrapacz/interlinear-pl-mt5-base-emb-sum-normalized-bh
mrapacz
text2text-generation
[ "transformers", "pytorch", "morph-t5-sum", "text2text-generation", "pl", "dataset:mrapacz/greek-interlinear-translations", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-02-08T12:28:40Z
2025-02-21T21:30:31+00:00
11
0
--- base_model: - mT5-base datasets: - mrapacz/greek-interlinear-translations language: - pl library_name: transformers license: cc-by-sa-4.0 metrics: - bleu --- # Model Card for Ancient Greek to Polish Interlinear Translation Model This model performs interlinear translation from Ancient Greek to Polish, maintaining word-level alignment between source and target texts. You can find the source code used for training this and other models trained as part of this project in the [GitHub repository](https://github.com/mrapacz/loreslm-interlinear-translation). ## Model Details ### Model Description - **Developed By:** Maciej Rapacz, AGH University of Kraków - **Model Type:** MorphT5SumForConditionalGeneration - **Base Model:** mT5-base - **Tokenizer:** mT5 - **Language(s):** Ancient Greek (source) → Polish (target) - **License:** CC BY-NC-SA 4.0 - **Tag Set:** BH (Bible Hub) - **Text Preprocessing:** Normalized - **Morphological Encoding:** emb-sum ### Model Performance - **BLEU Score:** 50.43 - **SemScore:** 0.92 ### Model Sources - **Repository:** https://github.com/mrapacz/loreslm-interlinear-translation - **Paper:** https://aclanthology.org/2025.loreslm-1.11/ ## Usage Example > **Note**: This model uses a modification of T5-family models that includes dedicated embedding layers for encoding morphological information. To load these models, install the [morpht5](https://github.com/mrapacz/loreslm-interlinear-translation/blob/master/morpht5/README.md) package: > ```bash > pip install morpht5 > ``` ```python >>> from morpht5 import MorphT5SumForConditionalGeneration, MorphT5Tokenizer >>> text = ['λεγει', 'αυτω', 'ο', 'ιησους', 'εγειρε', 'αρον', 'τον', 'κραβαττον', 'σου', 'και', 'περιπατει'] >>> tags = ['V-PIA-3S', 'PPro-DM3S', 'Art-NMS', 'N-NMS', 'V-PMA-2S', 'V-AMA-2S', 'Art-AMS', 'N-AMS', 'PPro-G2S', 'Conj', 'V-PMA-2S'] >>> tokenizer = MorphT5Tokenizer.from_pretrained("mrapacz/interlinear-pl-mt5-base-emb-sum-normalized-bh") >>> inputs = tokenizer( text=text, morph_tags=tags, return_tensors="pt" ) >>> model = MorphT5SumForConditionalGeneration.from_pretrained("mrapacz/interlinear-pl-mt5-base-emb-sum-normalized-bh") >>> outputs = model.generate( **inputs, max_new_tokens=100, early_stopping=True, ) >>> decoded = tokenizer.decode(outputs[0], skip_special_tokens=True, keep_block_separator=True) >>> decoded = decoded.replace(tokenizer.target_block_separator_token, " | ") >>> decoded 'Mówi | mu | - | Jezus | wstawaj | weź | - | matę | swoją | i | chodź' ``` ## Citation If you use this model, please cite the following paper: ``` @inproceedings{rapacz-smywinski-pohl-2025-low, title = "Low-Resource Interlinear Translation: Morphology-Enhanced Neural Models for {A}ncient {G}reek", author = "Rapacz, Maciej and Smywi{\'n}ski-Pohl, Aleksander", editor = "Hettiarachchi, Hansi and Ranasinghe, Tharindu and Rayson, Paul and Mitkov, Ruslan and Gaber, Mohamed and Premasiri, Damith and Tan, Fiona Anting and Uyangodage, Lasitha", booktitle = "Proceedings of the First Workshop on Language Models for Low-Resource Languages", month = jan, year = "2025", address = "Abu Dhabi, United Arab Emirates", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2025.loreslm-1.11/", pages = "145--165", abstract = "Contemporary machine translation systems prioritize fluent, natural-sounding output with flexible word ordering. In contrast, interlinear translation maintains the source text`s syntactic structure by aligning target language words directly beneath their source counterparts. 
Despite its importance in classical scholarship, automated approaches to interlinear translation remain understudied. We evaluated neural interlinear translation from Ancient Greek to English and Polish using four transformer-based models: two Ancient Greek-specialized (GreTa and PhilTa) and two general-purpose multilingual models (mT5-base and mT5-large). Our approach introduces novel morphological embedding layers and evaluates text preprocessing and tag set selection across 144 experimental configurations using a word-aligned parallel corpus of the Greek New Testament. Results show that morphological features through dedicated embedding layers significantly enhance translation quality, improving BLEU scores by 35{\%} (44.67 {\textrightarrow} 60.40) for English and 38{\%} (42.92 {\textrightarrow} 59.33) for Polish compared to baseline models. PhilTa achieves state-of-the-art performance for English, while mT5-large does so for Polish. Notably, PhilTa maintains stable performance using only 10{\%} of training data. Our findings challenge the assumption that modern neural architectures cannot benefit from explicit morphological annotations. While preprocessing strategies and tag set selection show minimal impact, the substantial gains from morphological embeddings demonstrate their value in low-resource scenarios." } ```
null
Non_BioNLP
# Model Card for Ancient Greek to Polish Interlinear Translation Model This model performs interlinear translation from Ancient Greek to Polish, maintaining word-level alignment between source and target texts. You can find the source code used for training this and other models trained as part of this project in the [GitHub repository](https://github.com/mrapacz/loreslm-interlinear-translation). ## Model Details ### Model Description - **Developed By:** Maciej Rapacz, AGH University of Kraków - **Model Type:** MorphT5SumForConditionalGeneration - **Base Model:** mT5-base - **Tokenizer:** mT5 - **Language(s):** Ancient Greek (source) → Polish (target) - **License:** CC BY-NC-SA 4.0 - **Tag Set:** BH (Bible Hub) - **Text Preprocessing:** Normalized - **Morphological Encoding:** emb-sum ### Model Performance - **BLEU Score:** 50.43 - **SemScore:** 0.92 ### Model Sources - **Repository:** https://github.com/mrapacz/loreslm-interlinear-translation - **Paper:** https://aclanthology.org/2025.loreslm-1.11/ ## Usage Example > **Note**: This model uses a modification of T5-family models that includes dedicated embedding layers for encoding morphological information. To load these models, install the [morpht5](https://github.com/mrapacz/loreslm-interlinear-translation/blob/master/morpht5/README.md) package: > ```bash > pip install morpht5 > ``` ```python >>> from morpht5 import MorphT5SumForConditionalGeneration, MorphT5Tokenizer >>> text = ['λεγει', 'αυτω', 'ο', 'ιησους', 'εγειρε', 'αρον', 'τον', 'κραβαττον', 'σου', 'και', 'περιπατει'] >>> tags = ['V-PIA-3S', 'PPro-DM3S', 'Art-NMS', 'N-NMS', 'V-PMA-2S', 'V-AMA-2S', 'Art-AMS', 'N-AMS', 'PPro-G2S', 'Conj', 'V-PMA-2S'] >>> tokenizer = MorphT5Tokenizer.from_pretrained("mrapacz/interlinear-pl-mt5-base-emb-sum-normalized-bh") >>> inputs = tokenizer( text=text, morph_tags=tags, return_tensors="pt" ) >>> model = MorphT5SumForConditionalGeneration.from_pretrained("mrapacz/interlinear-pl-mt5-base-emb-sum-normalized-bh") >>> outputs = model.generate( **inputs, max_new_tokens=100, early_stopping=True, ) >>> decoded = tokenizer.decode(outputs[0], skip_special_tokens=True, keep_block_separator=True) >>> decoded = decoded.replace(tokenizer.target_block_separator_token, " | ") >>> decoded 'Mówi | mu | - | Jezus | wstawaj | weź | - | matę | swoją | i | chodź' ``` ## Citation If you use this model, please cite the following paper: ``` @inproceedings{rapacz-smywinski-pohl-2025-low, title = "Low-Resource Interlinear Translation: Morphology-Enhanced Neural Models for {A}ncient {G}reek", author = "Rapacz, Maciej and Smywi{\'n}ski-Pohl, Aleksander", editor = "Hettiarachchi, Hansi and Ranasinghe, Tharindu and Rayson, Paul and Mitkov, Ruslan and Gaber, Mohamed and Premasiri, Damith and Tan, Fiona Anting and Uyangodage, Lasitha", booktitle = "Proceedings of the First Workshop on Language Models for Low-Resource Languages", month = jan, year = "2025", address = "Abu Dhabi, United Arab Emirates", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2025.loreslm-1.11/", pages = "145--165", abstract = "Contemporary machine translation systems prioritize fluent, natural-sounding output with flexible word ordering. In contrast, interlinear translation maintains the source text`s syntactic structure by aligning target language words directly beneath their source counterparts. Despite its importance in classical scholarship, automated approaches to interlinear translation remain understudied. 
We evaluated neural interlinear translation from Ancient Greek to English and Polish using four transformer-based models: two Ancient Greek-specialized (GreTa and PhilTa) and two general-purpose multilingual models (mT5-base and mT5-large). Our approach introduces novel morphological embedding layers and evaluates text preprocessing and tag set selection across 144 experimental configurations using a word-aligned parallel corpus of the Greek New Testament. Results show that morphological features through dedicated embedding layers significantly enhance translation quality, improving BLEU scores by 35{\%} (44.67 {\textrightarrow} 60.40) for English and 38{\%} (42.92 {\textrightarrow} 59.33) for Polish compared to baseline models. PhilTa achieves state-of-the-art performance for English, while mT5-large does so for Polish. Notably, PhilTa maintains stable performance using only 10{\%} of training data. Our findings challenge the assumption that modern neural architectures cannot benefit from explicit morphological annotations. While preprocessing strategies and tag set selection show minimal impact, the substantial gains from morphological embeddings demonstrate their value in low-resource scenarios." } ```
{"base_model": ["mT5-base"], "datasets": ["mrapacz/greek-interlinear-translations"], "language": ["pl"], "library_name": "transformers", "license": "cc-by-sa-4.0", "metrics": ["bleu"]}
task
[ "TRANSLATION" ]
46,045
bhenrym14/airoboros-7b-gpt4-1.4.1-lxctx-PI-16384-GPTQ
bhenrym14
text-generation
[ "transformers", "llama", "text-generation", "dataset:jondurbin/airoboros-gpt4-1.4.1", "arxiv:2306.15595", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-07-10T02:16:21Z
2023-07-10T21:44:47+00:00
16
2
---
datasets:
- jondurbin/airoboros-gpt4-1.4.1
---

Mostly untested!

# RoPE Scaled QLoRA Fine-tune of Llama-7b on airoboros-gpt4-1.4.1 (GPTQ)

## Overview

This is [Jon Durbin's Airoboros 7B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.4) (GPTQ Quantization) with several key modifications:
- Context length extended to 16384 by RoPE Scaled Embeddings.
- The Llama-7b base model is pretrained for an additional 150 steps on 16384-token sequences from the Pile dataset.
- During the fine-tune, training sequences beyond 2048 have the target truncated to equal 2048.
- Used the airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4.
- **This is a QLoRA fine-tune**

Pretraining and fine-tuning each took ~8 hours on 1x RTX 6000 Ada.

## How to Use

For context beyond 8192 tokens, do NOT use exllama. AutoGPTQ appears to work.

REQUIRED: you'll need to patch in the appropriate RoPE scaling module. See: [replace_llama_rope_with_scaled_rope](https://github.com/bhenrym14/qlora-airoboros-longcontext/blob/main/scaledllama/llama_rope_scaled_monkey_patch-16k.py). You will need to call `replace_llama_rope_with_scaled_rope` in ooba somewhere. Calling this at the top of the training module after the imports works for me. Hopefully there is a quick fix to exllama that can make >8k work soon.

Otherwise, for context <8k, use exllama. Set `max_seq_len` to 16384, and `compress_pos_emb` to 8.

## Motivation

Recent advancements in extending context by RoPE scaling ([kaiokendev](https://kaiokendev.github.io/til#extending-context-to-8k) and [Meta AI](https://arxiv.org/abs/2306.15595)) demonstrate the ability to extend the context window without (total) retraining. Finetuning has been shown to be necessary to properly leverage the longer context. Here I attempt to take a smaller model and extend the context to 16k tokens. This, however, proved problematic as stability suffered in the 8-10k+ range. The Meta paper demonstrated that decreasing perplexities can still be achieved at these context lengths; however, their approach involved tuning all variables on the maximum sequence length after incorporating the RoPE scaling adjustment. Here I sought to stabilize the performance at longer contexts by fine-tuning in two stages:
1. Fine-tune on 16k token sequences from the Pile dataset. This is very compute intensive.
2. Fine-tune on the airoboros dataset as I have done in prior models.

The results follow.

## Relative Performance (perplexity)

| Context (tokens) | bhenrym14/airo-7b-lxctx-PI-16384-fp16 | No Pretrain | airoboros-7b-gpt4-1.4 |
| ---| ------- | ------ | ---|
| 64 | 29.39 |32.28 | **25.90** |
| 128 | 18.80 |20.43 | **15.70** |
| 256 | 13.67 |14.60 | **11.33** |
| 512 | 10.60 |11.20 | **8.60** |
| 1024 | 8.54 | 8.945 | **7.07** |
| 2048 | 7.34 | 7.71 | **6.2** |
| 4096 | **6.81** | 7.25 | 48.65 |
| 8192 | **6.49** | 7.07 | 1018 |
| 16384 | **6.31** | 8.70 | very big |

- The pretraining successfully ameliorates the rise in perplexity between 8192 and 16384. Not only that, it outperforms the non-pretrained variant everywhere.
- For contexts shorter than the original 2048, the original model has lower perplexity. This is consistent with the literature. The gap shrinks with context length, with the original becoming incoherent beyond this point.
- This comparison isn't perfect. I did use the 1.4.1 dataset and the finetuning method is different (QLoRA vs full). In short, there are other potentially influential variables responsible for these performance differences.
## Quantization Quantized with AutoGPTQ (bits = 4, group_size = 128, desc_act = True). ## Prompting: See original model card below. # Original model card: Jon Durbin's Airoboros 7B GPT4 1.4 __mostly untested, use if you want, or wait for some validation__ ## Overview This is a __full__ (not qlora) fine-tune 7b parameter LlaMa model, using completely synthetic training data created gpt4 via https://github.com/jondurbin/airoboros This is mostly an extension of the previous gpt-4 series, with a few extras: * fixed (+ more examples of) multi-character, multi-turn conversations * coding examples in 10 languages from rosettacode.org dataset thanks to Mike aka kryptkpr: https://huggingface.co/datasets/mike-ravkine/rosettacode-parsed * more roleplay examples * jokes * riddles * all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT) This model was fine-tuned with a fork of [FastChat](https://github.com/jondurbin/FastChat) The prompt it was trained with was: ``` A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: [prompt] ASSISTANT: ``` So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon). ## Usage To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors. ``` pip install git+https://github.com/jondurbin/FastChat ``` Be sure you are pulling the latest branch! Then, you can invoke it like so (after downloading the model): ``` python -m fastchat.serve.cli \ --model-path airoboros-7b-gpt4-1.4 \ --temperature 0.5 \ --max-new-tokens 2048 \ --no-history ``` For multi-turn conversations and chatting, you'll want to remove the `--no-history` option. ### Context obedient question answering By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations. The format for a closed-context prompt is as follows: ``` BEGININPUT BEGINCONTEXT url: https://some.web.site/123 date: 2023-06-01 ... other metdata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up. *The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!* I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it. 
- `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. - `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described: ``` USER: BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ASSISTANT: ``` ### Coding You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt): ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` Or: ``` Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values. ``` You can optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.: ``` Write a websocket application in node.js. PLAINFORMAT ``` ### Word games / trivia ``` USER: This anagram of "silent" is what you do when paying attention and hearing someone. ASSISTANT: listen ``` ``` USER: Give me a list of 7 words that start with EN ASSISTANT: 1. Endlessly 2. Enthusiastically 3. Enormously 4. Enveloping 5. Enduring 6. Enlightening 7. Encouraging ``` ### Multiple choice ``` USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt? A) Mercury B) Venus C) Pluto D) Jupiter ASSISTANT: C, Pluto ``` ### Writing <details> <summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary> Response: ``` ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home. Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. 
They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes. No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice. So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon</summary> ``` </details> ### Jokes ``` USER: Tell me a joke about bananas. ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E. ``` ### Riddles (not great, but slightly better) ``` USER: What has a neck but no head? ASSISTANT: A bottle. ``` ### Multi-character conversations ``` USER: Characters: - Yoda, the ever clever jedi master. - Homer Simpson, you know, the goofy yellow guy. - USER/Jon - don't generate any speech or actions on behalf of Jon Rules: - be sure to use the manerisms and speaking style of the characters correctly - surround character actions with asterisks - always put quotes around words spoken by the characters - use a new line for each character's actions or speech - always prefix actions/speech with the character's name and colon - only generate text and actions on behalf of Yoda or Homer, and never anyone else Conversation will revolve around the grapes, in a local cafe with delicious coffee. Generate a single interaction between Yoda and Homer to start, with one sentence each. ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer." Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!" USER: *enters the cafe* Sorry I'm late guys! ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes." Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!" *Yoda raises an eyebrow* ``` ### Usage and License Notices All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-nc-4.0' license, but really it is subject to a custom/special license because: - the base model is LLaMa, which has it's own special research license - the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clausing saying the data can't be used to create models to compete with openai So, to reiterate: this model (and datasets) cannot be used commercially.
null
Non_BioNLP
Mostly untested!

# RoPE Scaled QLoRA Fine-tune of Llama-7b on airoboros-gpt4-1.4.1 (GPTQ)

## Overview

This is [Jon Durbin's Airoboros 7B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.4) (GPTQ Quantization) with several key modifications:
- Context length extended to 16384 by RoPE Scaled Embeddings.
- The Llama-7b base model is pretrained for an additional 150 steps on 16384-token sequences from the Pile dataset.
- During the fine-tune, training sequences beyond 2048 have the target truncated to equal 2048.
- Used the airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4.
- **This is a QLoRA fine-tune**

Pretraining and fine-tuning each took ~8 hours on 1x RTX 6000 Ada.

## How to Use

For context beyond 8192 tokens, do NOT use exllama. AutoGPTQ appears to work.

REQUIRED: you'll need to patch in the appropriate RoPE scaling module. See: [replace_llama_rope_with_scaled_rope](https://github.com/bhenrym14/qlora-airoboros-longcontext/blob/main/scaledllama/llama_rope_scaled_monkey_patch-16k.py). You will need to call `replace_llama_rope_with_scaled_rope` in ooba somewhere. Calling this at the top of the training module after the imports works for me. Hopefully there is a quick fix to exllama that can make >8k work soon.

Otherwise, for context <8k, use exllama. Set `max_seq_len` to 16384, and `compress_pos_emb` to 8.

## Motivation

Recent advancements in extending context by RoPE scaling ([kaiokendev](https://kaiokendev.github.io/til#extending-context-to-8k) and [Meta AI](https://arxiv.org/abs/2306.15595)) demonstrate the ability to extend the context window without (total) retraining. Finetuning has been shown to be necessary to properly leverage the longer context. Here I attempt to take a smaller model and extend the context to 16k tokens. This, however, proved problematic as stability suffered in the 8-10k+ range. The Meta paper demonstrated that decreasing perplexities can still be achieved at these context lengths; however, their approach involved tuning all variables on the maximum sequence length after incorporating the RoPE scaling adjustment. Here I sought to stabilize the performance at longer contexts by fine-tuning in two stages:
1. Fine-tune on 16k token sequences from the Pile dataset. This is very compute intensive.
2. Fine-tune on the airoboros dataset as I have done in prior models.

The results follow.

## Relative Performance (perplexity)

| Context (tokens) | bhenrym14/airo-7b-lxctx-PI-16384-fp16 | No Pretrain | airoboros-7b-gpt4-1.4 |
| ---| ------- | ------ | ---|
| 64 | 29.39 |32.28 | **25.90** |
| 128 | 18.80 |20.43 | **15.70** |
| 256 | 13.67 |14.60 | **11.33** |
| 512 | 10.60 |11.20 | **8.60** |
| 1024 | 8.54 | 8.945 | **7.07** |
| 2048 | 7.34 | 7.71 | **6.2** |
| 4096 | **6.81** | 7.25 | 48.65 |
| 8192 | **6.49** | 7.07 | 1018 |
| 16384 | **6.31** | 8.70 | very big |

- The pretraining successfully ameliorates the rise in perplexity between 8192 and 16384. Not only that, it outperforms the non-pretrained variant everywhere.
- For contexts shorter than the original 2048, the original model has lower perplexity. This is consistent with the literature. The gap shrinks with context length, with the original becoming incoherent beyond this point.
- This comparison isn't perfect. I did use the 1.4.1 dataset and the finetuning method is different (QLoRA vs full). In short, there are other potentially influential variables responsible for these performance differences.

## Quantization
Quantized with AutoGPTQ (bits = 4, group_size = 128, desc_act = True).

## Prompting:
See original model card below.
# Original model card: Jon Durbin's Airoboros 7B GPT4 1.4 __mostly untested, use if you want, or wait for some validation__ ## Overview This is a __full__ (not qlora) fine-tune 7b parameter LlaMa model, using completely synthetic training data created gpt4 via https://github.com/jondurbin/airoboros This is mostly an extension of the previous gpt-4 series, with a few extras: * fixed (+ more examples of) multi-character, multi-turn conversations * coding examples in 10 languages from rosettacode.org dataset thanks to Mike aka kryptkpr: https://huggingface.co/datasets/mike-ravkine/rosettacode-parsed * more roleplay examples * jokes * riddles * all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT) This model was fine-tuned with a fork of [FastChat](https://github.com/jondurbin/FastChat) The prompt it was trained with was: ``` A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: [prompt] ASSISTANT: ``` So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon). ## Usage To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors. ``` pip install git+https://github.com/jondurbin/FastChat ``` Be sure you are pulling the latest branch! Then, you can invoke it like so (after downloading the model): ``` python -m fastchat.serve.cli \ --model-path airoboros-7b-gpt4-1.4 \ --temperature 0.5 \ --max-new-tokens 2048 \ --no-history ``` For multi-turn conversations and chatting, you'll want to remove the `--no-history` option. ### Context obedient question answering By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations. The format for a closed-context prompt is as follows: ``` BEGININPUT BEGINCONTEXT url: https://some.web.site/123 date: 2023-06-01 ... other metdata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up. *The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!* I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it. 
- `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. - `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described: ``` USER: BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ASSISTANT: ``` ### Coding You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt): ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` Or: ``` Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values. ``` You can optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.: ``` Write a websocket application in node.js. PLAINFORMAT ``` ### Word games / trivia ``` USER: This anagram of "silent" is what you do when paying attention and hearing someone. ASSISTANT: listen ``` ``` USER: Give me a list of 7 words that start with EN ASSISTANT: 1. Endlessly 2. Enthusiastically 3. Enormously 4. Enveloping 5. Enduring 6. Enlightening 7. Encouraging ``` ### Multiple choice ``` USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt? A) Mercury B) Venus C) Pluto D) Jupiter ASSISTANT: C, Pluto ``` ### Writing <details> <summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary> Response: ``` ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home. Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. 
They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes. No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice. So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon</summary> ``` </details> ### Jokes ``` USER: Tell me a joke about bananas. ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E. ``` ### Riddles (not great, but slightly better) ``` USER: What has a neck but no head? ASSISTANT: A bottle. ``` ### Multi-character conversations ``` USER: Characters: - Yoda, the ever clever jedi master. - Homer Simpson, you know, the goofy yellow guy. - USER/Jon - don't generate any speech or actions on behalf of Jon Rules: - be sure to use the manerisms and speaking style of the characters correctly - surround character actions with asterisks - always put quotes around words spoken by the characters - use a new line for each character's actions or speech - always prefix actions/speech with the character's name and colon - only generate text and actions on behalf of Yoda or Homer, and never anyone else Conversation will revolve around the grapes, in a local cafe with delicious coffee. Generate a single interaction between Yoda and Homer to start, with one sentence each. ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer." Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!" USER: *enters the cafe* Sorry I'm late guys! ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes." Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!" *Yoda raises an eyebrow* ``` ### Usage and License Notices All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-nc-4.0' license, but really it is subject to a custom/special license because: - the base model is LLaMa, which has it's own special research license - the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clausing saying the data can't be used to create models to compete with openai So, to reiterate: this model (and datasets) cannot be used commercially.
{"datasets": ["jondurbin/airoboros-gpt4-1.4.1"]}
task
[ "QUESTION_ANSWERING" ]
46,046
paintingpeter/distilbert-base-uncased-distilled-clinc
paintingpeter
text-classification
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2022-01-31T23:27:39+00:00
120
0
--- datasets: - clinc_oos license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-distilled-clinc results: - task: type: text-classification name: Text Classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - type: accuracy value: 0.9467741935483871 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.2795 - Accuracy: 0.9468 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.4223 | 1.0 | 318 | 2.5556 | 0.7561 | | 1.9655 | 2.0 | 636 | 1.3075 | 0.8577 | | 1.0041 | 3.0 | 954 | 0.6970 | 0.9165 | | 0.5449 | 4.0 | 1272 | 0.4637 | 0.9339 | | 0.3424 | 5.0 | 1590 | 0.3630 | 0.9397 | | 0.247 | 6.0 | 1908 | 0.3225 | 0.9442 | | 0.1968 | 7.0 | 2226 | 0.2983 | 0.9458 | | 0.1693 | 8.0 | 2544 | 0.2866 | 0.9465 | | 0.1547 | 9.0 | 2862 | 0.2820 | 0.9468 | | 0.1477 | 10.0 | 3180 | 0.2795 | 0.9468 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
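The card above documents only the distillation training run. A minimal usage sketch (not part of the original card, and assuming the checkpoint's config carries an id2label mapping for the CLINC intents) would look like this:

```python
from transformers import pipeline

# Load the distilled checkpoint as an intent classifier for CLINC-style queries.
classifier = pipeline(
    "text-classification",
    model="paintingpeter/distilbert-base-uncased-distilled-clinc",
)

# The returned label is whatever mapping is stored in the model config
# (a readable intent name, or a generic LABEL_* id if none is set).
print(classifier("how do I change my flight to a later date?"))
```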
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.2795 - Accuracy: 0.9468 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.4223 | 1.0 | 318 | 2.5556 | 0.7561 | | 1.9655 | 2.0 | 636 | 1.3075 | 0.8577 | | 1.0041 | 3.0 | 954 | 0.6970 | 0.9165 | | 0.5449 | 4.0 | 1272 | 0.4637 | 0.9339 | | 0.3424 | 5.0 | 1590 | 0.3630 | 0.9397 | | 0.247 | 6.0 | 1908 | 0.3225 | 0.9442 | | 0.1968 | 7.0 | 2226 | 0.2983 | 0.9458 | | 0.1693 | 8.0 | 2544 | 0.2866 | 0.9465 | | 0.1547 | 9.0 | 2862 | 0.2820 | 0.9468 | | 0.1477 | 10.0 | 3180 | 0.2795 | 0.9468 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
{"datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-distilled-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9467741935483871, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,047
RichardErkhov/EmergentMethods_-_Phi-3-mini-4k-instruct-graph-8bits
RichardErkhov
null
[ "safetensors", "phi3", "custom_code", "8-bit", "bitsandbytes", "region:us" ]
2025-03-08T19:53:04Z
2025-03-08T19:56:06+00:00
2
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Phi-3-mini-4k-instruct-graph - bnb 8bits - Model creator: https://huggingface.co/EmergentMethods/ - Original model: https://huggingface.co/EmergentMethods/Phi-3-mini-4k-instruct-graph/ Original model description: --- license: cc-by-nc-sa-4.0 datasets: - EmergentMethods/AskNews-GraphER-v0 tags: - entity-relationship-extraction - news-analysis language: - en pipeline_tag: text-generation --- # Model Card for Phi-3-mini-4k-instruct-graph This model is a fine-tuned version of Microsoft's Phi-3-mini-4k-instruct, specifically tailored for entity relationship extraction from general text data. It aims to match the quality and accuracy of GPT-4 in generating entity relationship graphs while offering improved efficiency for large-scale processing. ## Model Details - **Developed by:** [Emergent Methods](https://emergentmethods.ai/) - **Funded by:** [Emergent Methods](https://emergentmethods.ai/) - **Shared by:** [Emergent Methods](https://emergentmethods.ai/) - **Model type:** microsoft/phi-3-mini-4k-instruct (fine-tuned) - **Language(s):** English - **License:** Creative Commons Attribution Non Commercial Share Alike 4.0 - **Finetuned from model:** [microsoft/phi-3-mini-4k-instruct](https://huggingface.co/microsoft/phi-3-mini-4k-instruct) For more information, see our blog post.: 📰 [Blog](https://emergentmethods.medium.com/outperforming-claude-3-5-sonnet-with-phi-3-mini-4k-for-graph-entity-relationship-extraction-tasks-7c8f6c1ebd79) <br> ## Uses This model is designed for generating structured JSON data representing entity relationships in general text data. It can be used for: - Enhanced information retrieval in various text databases - Exploration of temporal relationships and evolving narratives across different types of documents - Advanced predictive modeling for trend analysis in diverse textual sources The model is particularly useful for applications requiring high-throughput processing of large volumes of text data, such as content aggregation platforms, research databases, and comprehensive text analysis systems. ## Bias, Risks, and Limitations Although the goal of the dataset is to reduce bias, and improve diversity, it is still biased to western languages and countries. This limitation originates from the abilities of Llama2 for the translation and summary generations. Further, any bias originating in Llama2 training data will also be present in this dataset, since Llama2 was used to summarize the open-web articles. Further, any biases present in Microsoft Phi-3 will be present in the present dataset. 
## Training Details - **Training Data:** Over 7,000 stories and updates from AskNews, curated to avoid topic overlap - **Training Procedure:** Fine-tuned using the Transformers library, SFTTrainer, PEFT, and QLoRA ## Evaluation Results Compared to GPT-4o (ground truth), Claude Sonnet 3.5, and the base Phi-3 model: | Metric | Phi-3 Fine-tuned | Claude Sonnet 3.5 | Phi-3 (base) | |:--------|:------------------|:-------------------|:--------------| | Nodes Similarity | 0.78 | 0.64 | 0.64 | | Edges Similarity | 0.49 | 0.41 | 0.30 | | JSON Consistency | 0.99 | 0.97 | 0.96 | | JSON Similarity | 0.75 | 0.67 | 0.63 | ## Environmental Impact - **Hardware Type:** 1x A100 SXM - **Hours used:** 3 hours - **Carbon Emitted:** 0.44 kg (According to Machine Learning Impact calculator) ## How to Get Started with the Model (Sample inference code) This code snippets show how to get quickly started with running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model = AutoModelForCausalLM.from_pretrained( "EmergentMethods/Phi-3-mini-4k-instruct-graph", device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained("EmergentMethods/Phi-3-mini-4k-instruct-graph") messages = [ {"role": "system", "content": """ A chat between a curious user and an artificial intelligence Assistant. The Assistant is an expert at identifying entities and relationships in text. The Assistant responds in JSON output only. The User provides text in the format: -------Text begin------- <User provided text> -------Text end------- The Assistant follows the following steps before replying to the User: 1. **identify the most important entities** The Assistant identifies the most important entities in the text. These entities are listed in the JSON output under the key "nodes", they follow the structure of a list of dictionaries where each dict is: "nodes":[{"id": <entity N>, "type": <type>, "detailed_type": <detailed type>}, ...] where "type": <type> is a broad categorization of the entity. "detailed type": <detailed_type> is a very descriptive categorization of the entity. 2. **determine relationships** The Assistant uses the text between -------Text begin------- and -------Text end------- to determine the relationships between the entities identified in the "nodes" list defined above. These relationships are called "edges" and they follow the structure of: "edges":[{"from": <entity 1>, "to": <entity 2>, "label": <relationship>}, ...] The <entity N> must correspond to the "id" of an entity in the "nodes" list. The Assistant never repeats the same node twice. The Assistant never repeats the same edge twice. The Assistant responds to the User in JSON only, according to the following JSON schema: {"type":"object","properties":{"nodes":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string"},"type":{"type":"string"},"detailed_type":{"type":"string"}},"required":["id","type","detailed_type"],"additionalProperties":false}},"edges":{"type":"array","items":{"type":"object","properties":{"from":{"type":"string"},"to":{"type":"string"},"label":{"type":"string"}},"required":["from","to","label"],"additionalProperties":false}}},"required":["nodes","edges"],"additionalProperties":false} """}, {"role": "user", "content": """ -------Text begin------- OpenAI is an American artificial intelligence (AI) research organization founded in December 2015 and headquartered in San Francisco, California. 
Its mission is to develop "safe and beneficial" artificial general intelligence, which it defines as "highly autonomous systems that outperform humans at most economically valuable work".[4] As a leading organization in the ongoing AI boom,[5] OpenAI is known for the GPT family of large language models, the DALL-E series of text-to-image models, and a text-to-video model named Sora.[6][7] Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. -------Text end------- """} ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) # Output: # { # "nodes": [ # { # "id": "OpenAI", # "type": "organization", # "detailed_type": "ai research organization" # }, # { # "id": "GPT family", # "type": "technology", # "detailed_type": "large language models" # }, # { # "id": "DALL-E series", # "type": "technology", # "detailed_type": "text-to-image models" # }, # { # "id": "Sora", # "type": "technology", # "detailed_type": "text-to-video model" # }, # { # "id": "ChatGPT", # "type": "technology", # "detailed_type": "generative ai" # }, # { # "id": "San Francisco", # "type": "location", # "detailed_type": "city" # }, # { # "id": "California", # "type": "location", # "detailed_type": "state" # }, # { # "id": "December 2015", # "type": "date", # "detailed_type": "foundation date" # }, # { # "id": "November 2022", # "type": "date", # "detailed_type": "release date" # } # ], # "edges": [ # { # "from": "OpenAI", # "to": "San Francisco", # "label": "headquartered in" # }, # { # "from": "San Francisco", # "to": "California", # "label": "located in" # }, # { # "from": "OpenAI", # "to": "December 2015", # "label": "founded in" # }, # { # "from": "OpenAI", # "to": "GPT family", # "label": "developed" # }, # { # "from": "OpenAI", # "to": "DALL-E series", # "label": "developed" # }, # { # "from": "OpenAI", # "to": "Sora", # "label": "developed" # }, # { # "from": "OpenAI", # "to": "ChatGPT", # "label": "released" # }, # { # "from": "ChatGPT", # "to": "November 2022", # "label": "released in" # } # ] # } ``` ## Ethical Considerations Users should be aware that this model is designed for entity relationship extraction from general text data and may not be suitable for other domains without further fine-tuning. The model's outputs should be reviewed and validated, especially when used in applications that may impact decision-making or public information dissemination.
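As a follow-on to the card's example (this addition is not part of the original card), the nodes/edges JSON the model emits can be loaded into a graph library for downstream analysis. The sketch below assumes networkx is installed and that the generated text parses cleanly against the schema shown above.

```python
import json
import networkx as nx

# A small literal in the same schema the model is prompted to produce.
raw = (
    '{"nodes": [{"id": "OpenAI", "type": "organization", '
    '"detailed_type": "ai research organization"}, '
    '{"id": "San Francisco", "type": "location", "detailed_type": "city"}], '
    '"edges": [{"from": "OpenAI", "to": "San Francisco", "label": "headquartered in"}]}'
)

data = json.loads(raw)
graph = nx.DiGraph()
for node in data["nodes"]:
    graph.add_node(node["id"], type=node["type"], detailed_type=node["detailed_type"])
for edge in data["edges"]:
    graph.add_edge(edge["from"], edge["to"], label=edge["label"])

print(graph.number_of_nodes(), "nodes /", graph.number_of_edges(), "edges")
```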
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Phi-3-mini-4k-instruct-graph - bnb 8bits - Model creator: https://huggingface.co/EmergentMethods/ - Original model: https://huggingface.co/EmergentMethods/Phi-3-mini-4k-instruct-graph/ Original model description: --- license: cc-by-nc-sa-4.0 datasets: - EmergentMethods/AskNews-GraphER-v0 tags: - entity-relationship-extraction - news-analysis language: - en pipeline_tag: text-generation --- # Model Card for Phi-3-mini-4k-instruct-graph This model is a fine-tuned version of Microsoft's Phi-3-mini-4k-instruct, specifically tailored for entity relationship extraction from general text data. It aims to match the quality and accuracy of GPT-4 in generating entity relationship graphs while offering improved efficiency for large-scale processing. ## Model Details - **Developed by:** [Emergent Methods](https://emergentmethods.ai/) - **Funded by:** [Emergent Methods](https://emergentmethods.ai/) - **Shared by:** [Emergent Methods](https://emergentmethods.ai/) - **Model type:** microsoft/phi-3-mini-4k-instruct (fine-tuned) - **Language(s):** English - **License:** Creative Commons Attribution Non Commercial Share Alike 4.0 - **Finetuned from model:** [microsoft/phi-3-mini-4k-instruct](https://huggingface.co/microsoft/phi-3-mini-4k-instruct) For more information, see our blog post.: 📰 [Blog](https://emergentmethods.medium.com/outperforming-claude-3-5-sonnet-with-phi-3-mini-4k-for-graph-entity-relationship-extraction-tasks-7c8f6c1ebd79) <br> ## Uses This model is designed for generating structured JSON data representing entity relationships in general text data. It can be used for: - Enhanced information retrieval in various text databases - Exploration of temporal relationships and evolving narratives across different types of documents - Advanced predictive modeling for trend analysis in diverse textual sources The model is particularly useful for applications requiring high-throughput processing of large volumes of text data, such as content aggregation platforms, research databases, and comprehensive text analysis systems. ## Bias, Risks, and Limitations Although the goal of the dataset is to reduce bias, and improve diversity, it is still biased to western languages and countries. This limitation originates from the abilities of Llama2 for the translation and summary generations. Further, any bias originating in Llama2 training data will also be present in this dataset, since Llama2 was used to summarize the open-web articles. Further, any biases present in Microsoft Phi-3 will be present in the present dataset. 
## Training Details - **Training Data:** Over 7,000 stories and updates from AskNews, curated to avoid topic overlap - **Training Procedure:** Fine-tuned using the Transformers library, SFTTrainer, PEFT, and QLoRA ## Evaluation Results Compared to GPT-4o (ground truth), Claude Sonnet 3.5, and the base Phi-3 model: | Metric | Phi-3 Fine-tuned | Claude Sonnet 3.5 | Phi-3 (base) | |:--------|:------------------|:-------------------|:--------------| | Nodes Similarity | 0.78 | 0.64 | 0.64 | | Edges Similarity | 0.49 | 0.41 | 0.30 | | JSON Consistency | 0.99 | 0.97 | 0.96 | | JSON Similarity | 0.75 | 0.67 | 0.63 | ## Environmental Impact - **Hardware Type:** 1x A100 SXM - **Hours used:** 3 hours - **Carbon Emitted:** 0.44 kg (According to Machine Learning Impact calculator) ## How to Get Started with the Model (Sample inference code) This code snippets show how to get quickly started with running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model = AutoModelForCausalLM.from_pretrained( "EmergentMethods/Phi-3-mini-4k-instruct-graph", device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained("EmergentMethods/Phi-3-mini-4k-instruct-graph") messages = [ {"role": "system", "content": """ A chat between a curious user and an artificial intelligence Assistant. The Assistant is an expert at identifying entities and relationships in text. The Assistant responds in JSON output only. The User provides text in the format: -------Text begin------- <User provided text> -------Text end------- The Assistant follows the following steps before replying to the User: 1. **identify the most important entities** The Assistant identifies the most important entities in the text. These entities are listed in the JSON output under the key "nodes", they follow the structure of a list of dictionaries where each dict is: "nodes":[{"id": <entity N>, "type": <type>, "detailed_type": <detailed type>}, ...] where "type": <type> is a broad categorization of the entity. "detailed type": <detailed_type> is a very descriptive categorization of the entity. 2. **determine relationships** The Assistant uses the text between -------Text begin------- and -------Text end------- to determine the relationships between the entities identified in the "nodes" list defined above. These relationships are called "edges" and they follow the structure of: "edges":[{"from": <entity 1>, "to": <entity 2>, "label": <relationship>}, ...] The <entity N> must correspond to the "id" of an entity in the "nodes" list. The Assistant never repeats the same node twice. The Assistant never repeats the same edge twice. The Assistant responds to the User in JSON only, according to the following JSON schema: {"type":"object","properties":{"nodes":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string"},"type":{"type":"string"},"detailed_type":{"type":"string"}},"required":["id","type","detailed_type"],"additionalProperties":false}},"edges":{"type":"array","items":{"type":"object","properties":{"from":{"type":"string"},"to":{"type":"string"},"label":{"type":"string"}},"required":["from","to","label"],"additionalProperties":false}}},"required":["nodes","edges"],"additionalProperties":false} """}, {"role": "user", "content": """ -------Text begin------- OpenAI is an American artificial intelligence (AI) research organization founded in December 2015 and headquartered in San Francisco, California. 
Its mission is to develop "safe and beneficial" artificial general intelligence, which it defines as "highly autonomous systems that outperform humans at most economically valuable work".[4] As a leading organization in the ongoing AI boom,[5] OpenAI is known for the GPT family of large language models, the DALL-E series of text-to-image models, and a text-to-video model named Sora.[6][7] Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. -------Text end------- """} ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) # Output: # { # "nodes": [ # { # "id": "OpenAI", # "type": "organization", # "detailed_type": "ai research organization" # }, # { # "id": "GPT family", # "type": "technology", # "detailed_type": "large language models" # }, # { # "id": "DALL-E series", # "type": "technology", # "detailed_type": "text-to-image models" # }, # { # "id": "Sora", # "type": "technology", # "detailed_type": "text-to-video model" # }, # { # "id": "ChatGPT", # "type": "technology", # "detailed_type": "generative ai" # }, # { # "id": "San Francisco", # "type": "location", # "detailed_type": "city" # }, # { # "id": "California", # "type": "location", # "detailed_type": "state" # }, # { # "id": "December 2015", # "type": "date", # "detailed_type": "foundation date" # }, # { # "id": "November 2022", # "type": "date", # "detailed_type": "release date" # } # ], # "edges": [ # { # "from": "OpenAI", # "to": "San Francisco", # "label": "headquartered in" # }, # { # "from": "San Francisco", # "to": "California", # "label": "located in" # }, # { # "from": "OpenAI", # "to": "December 2015", # "label": "founded in" # }, # { # "from": "OpenAI", # "to": "GPT family", # "label": "developed" # }, # { # "from": "OpenAI", # "to": "DALL-E series", # "label": "developed" # }, # { # "from": "OpenAI", # "to": "Sora", # "label": "developed" # }, # { # "from": "OpenAI", # "to": "ChatGPT", # "label": "released" # }, # { # "from": "ChatGPT", # "to": "November 2022", # "label": "released in" # } # ] # } ``` ## Ethical Considerations Users should be aware that this model is designed for entity relationship extraction from general text data and may not be suitable for other domains without further fine-tuning. The model's outputs should be reviewed and validated, especially when used in applications that may impact decision-making or public information dissemination.
{}
task
[ "TRANSLATION" ]
46,048
Ahmedhany216/Fake-news-gen
Ahmedhany216
text2text-generation
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-01-07T09:44:14Z
2024-01-07T21:33:41+00:00
11
0
--- base_model: google/flan-t5-base license: apache-2.0 tags: - generated_from_trainer model-index: - name: Fake-news-gen results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Model Card: Fake-news-generator ## Model Purpose This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the XSUM BBC news dataset. Its primary purpose is to serve as a tool for research, education, and testing in the domain of AI-generated fake news. ## Summary The model is a conditional text generation system specifically fine-tuned to create artificially generated news articles based on short text summaries. This demonstration aims to showcase the capabilities and potential risks associated with AI systems automatically synthesizing false or misleading news content from limited input information. ## Intended Uses 1. **Research on AI Fake News Generation:** - Understanding the capabilities and limitations of AI models in generating deceptive content. - Exploring potential mitigation strategies and ethical considerations. 2. **Educational Purposes:** - Increasing awareness of the challenges posed by AI-generated fake content. - Promoting responsible AI development and usage. 3. **Testing Fake News Detection Systems:** - Evaluating the effectiveness of automatic fake news detection systems against AI-generated content. ## Factors - **Training Data:** - Initially trained on XSUM BBC news summarization data. - Fine-tuned end-to-end to generate full articles from short text summaries. - **Generation Process:** - Content is generated token-by-token based on the provided summary prompt. - No ground-truth real/fake labels or classifier included in the training data. - **Output Characteristics:** - Outputs are raw model decodes without post-processing. ## Caveats and Recommendations - **Not Intended for Malicious Uses:** - This model is explicitly not intended for creating or disseminating malicious or harmful content. - **Ethical Considerations:** - Users are strongly advised to exercise caution and ethical responsibility when using or sharing outputs from this model. - **Limitation on Real/Fake Labels:** - The model lacks ground-truth labels for distinguishing between real and fake news. - **Limited Post-Processing:** - Generated outputs are presented without additional post-processing to emphasize raw model capabilities. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.36.0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
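The card describes the model's purpose and training but not how to call it. A minimal inference sketch follows (not part of the original card; the expected prompt format is undocumented, so passing the raw summary is an assumption here):

```python
from transformers import pipeline

# Text-to-text generation: a short summary in, an article-style expansion out.
generator = pipeline("text2text-generation", model="Ahmedhany216/Fake-news-gen")

summary = "Local council approves plan to build a new footbridge across the river."
result = generator(summary, max_length=256, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```

Outputs are synthetic by design and should only be used in the research, education, and detector-testing settings listed above.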
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Model Card: Fake-news-generator ## Model Purpose This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the XSUM BBC news dataset. Its primary purpose is to serve as a tool for research, education, and testing in the domain of AI-generated fake news. ## Summary The model is a conditional text generation system specifically fine-tuned to create artificially generated news articles based on short text summaries. This demonstration aims to showcase the capabilities and potential risks associated with AI systems automatically synthesizing false or misleading news content from limited input information. ## Intended Uses 1. **Research on AI Fake News Generation:** - Understanding the capabilities and limitations of AI models in generating deceptive content. - Exploring potential mitigation strategies and ethical considerations. 2. **Educational Purposes:** - Increasing awareness of the challenges posed by AI-generated fake content. - Promoting responsible AI development and usage. 3. **Testing Fake News Detection Systems:** - Evaluating the effectiveness of automatic fake news detection systems against AI-generated content. ## Factors - **Training Data:** - Initially trained on XSUM BBC news summarization data. - Fine-tuned end-to-end to generate full articles from short text summaries. - **Generation Process:** - Content is generated token-by-token based on the provided summary prompt. - No ground-truth real/fake labels or classifier included in the training data. - **Output Characteristics:** - Outputs are raw model decodes without post-processing. ## Caveats and Recommendations - **Not Intended for Malicious Uses:** - This model is explicitly not intended for creating or disseminating malicious or harmful content. - **Ethical Considerations:** - Users are strongly advised to exercise caution and ethical responsibility when using or sharing outputs from this model. - **Limitation on Real/Fake Labels:** - The model lacks ground-truth labels for distinguishing between real and fake news. - **Limited Post-Processing:** - Generated outputs are presented without additional post-processing to emphasize raw model capabilities. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.36.0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
{"base_model": "google/flan-t5-base", "license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "Fake-news-gen", "results": []}]}
task
[ "SUMMARIZATION" ]
46,049
UGARIT/grc-ner-xlmr
UGARIT
token-classification
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "grc", "base_model:UGARIT/grc-alignment", "base_model:finetune:UGARIT/grc-alignment", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-03-31T21:52:11Z
2024-05-23T21:03:04+00:00
66
1
--- base_model: - UGARIT/grc-alignment language: - grc license: mit tags: - token-classification inference: parameters: aggregation_strategy: first widget: - text: ταῦτα εἴπας ὁ Ἀλέξανδρος παρίζει Πέρσῃ ἀνδρὶ ἄνδρα Μακεδόνα ὡς γυναῖκα τῷ λόγῳ · οἳ δέ , ἐπείτε σφέων οἱ Πέρσαι ψαύειν ἐπειρῶντο , διεργάζοντο αὐτούς . example_title: Example 1 --- # Named Entity Recognition for Ancient Greek Pretrained NER tagging model for ancient Greek # Data We trained the models on available annotated corpora in Ancient Greek. There are only two sizeable annotated datasets in Ancient Greek, which are currently under release: the first one, by Berti 2023, consists of a fully annotated text of Athenaeus’ Deipnosophists, developed in the context of the Digital Athenaeus project. The second one, by Foka et al. 2020, is a fully annotated text of Pausanias’ Periegesis Hellados, developed in the context of the Digital Periegesis project. In addition, we used smaller corpora annotated by students and scholars on Recogito: the Odyssey annotated by Kemp 2021; a mixed corpus including excerpts from the Library attributed to Apollodorus and from Strabo’s Geography, annotated by Chiara Palladino; Book 1 of Xenophon’s Anabasis, created by Thomas Visser; and Demosthenes’ Against Neaira, created by Rachel Milio. ### Training Dataset | | **Person** | **Location** | **NORP** | **MISC** | |----------------|------------------|-------------------|-------------------|-------------------| | Odyssey | 2.469 | 698 | 0 | 0 | | Deipnosophists | 14.921 | 2.699 | 5.110 | 3.060 | | Pausanias | 10.205 | 8.670 | 4.972 | 0 | | Other Datasets | 3.283 | 2.040 | 1.089 | 0 | | **Total** | **30.878** | **14.107** | **11.171** | **3.060** | --- ### Validation Dataset | | **Person** | **Location** | **NORP** | **MISC** | |----------------|------------------|-------------------|-------------------|-------------------| | Xenophon | 1.190 | 796 | 857 | 0 | # Results | Class | Metric | Test | Validation | |---------|-----------|--------|--------| | **LOC** | precision | 83.33% | 88.66% | | | recall | 81.27% | 88.94% | | | f1 | 82.29% | 88.80% | | **MISC** | precision | 83.25% | 0 | | | recall | 81.21% | 0 | | | f1 | 82.22% | 0 | | **NORP** | precision | 88.71% | 94.76% | | | recall | 90.76% | 94.50% | | | f1 | 89.73% | 94.63% | | **PER** | precision | 91.72% | 94.22% | | | recall | 94.42% | 96.06% | | | f1 | 93.05% | 95.13% | | **Overall** | precision | 88.83% | 92.91% | | | recall | 89.99% | 93.72% | | | f1 | 89.41% | 93.32% | | | Accuracy | 97.50% | 98.87% | # Usage This [colab notebook](https://colab.research.google.com/drive/1K6ER_C8d_AxBm0Yrtr628P3weH1Rxhht?usp=sharing) contains the necessary code to use the model.
```python from transformers import pipeline # create pipeline for NER ner = pipeline('ner', model="UGARIT/grc-ner-xlmr", aggregation_strategy = 'first') ner("ταῦτα εἴπας ὁ Ἀλέξανδρος παρίζει Πέρσῃ ἀνδρὶ ἄνδρα Μακεδόνα ὡς γυναῖκα τῷ λόγῳ · οἳ δέ , ἐπείτε σφέων οἱ Πέρσαι ψαύειν ἐπειρῶντο , διεργάζοντο αὐτούς .") ``` Output ``` [{'entity_group': 'PER', 'score': 0.9999428, 'word': '', 'start': 13, 'end': 14}, {'entity_group': 'PER', 'score': 0.99994195, 'word': 'Ἀλέξανδρος', 'start': 14, 'end': 24}, {'entity_group': 'NORP', 'score': 0.9087087, 'word': 'Πέρσῃ', 'start': 32, 'end': 38}, {'entity_group': 'NORP', 'score': 0.97572577, 'word': 'Μακεδόνα', 'start': 50, 'end': 59}, {'entity_group': 'NORP', 'score': 0.9993412, 'word': 'Πέρσαι', 'start': 104, 'end': 111}] ``` # Citation: ``` @inproceedings{palladino-yousef-2024-development, title = "Development of Robust {NER} Models and Named Entity Tagsets for {A}ncient {G}reek", author = "Palladino, Chiara and Yousef, Tariq", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.11", pages = "89--97", abstract = "This contribution presents a novel approach to the development and evaluation of transformer-based models for Named Entity Recognition and Classification in Ancient Greek texts. We trained two models with annotated datasets by consolidating potentially ambiguous entity types under a harmonized set of classes. Then, we tested their performance with out-of-domain texts, reproducing a real-world use case. Both models performed very well under these conditions, with the multilingual model being slightly superior on the monolingual one. In the conclusion, we emphasize current limitations due to the scarcity of high-quality annotated corpora and to the lack of cohesive annotation strategies for ancient languages.", } ```
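Building on the card's own example (this addition is not part of the original card), the pipeline can also be run over a batch of sentences and the predictions tallied by the entity classes reported in the tables above:

```python
from collections import Counter
from transformers import pipeline

ner = pipeline("ner", model="UGARIT/grc-ner-xlmr", aggregation_strategy="first")

sentences = [
    "ταῦτα εἴπας ὁ Ἀλέξανδρος παρίζει Πέρσῃ ἀνδρὶ ἄνδρα Μακεδόνα ὡς γυναῖκα τῷ λόγῳ .",
    "οἱ Πέρσαι ψαύειν ἐπειρῶντο .",
]

# Passing a list returns one list of aggregated entity spans per sentence.
counts = Counter()
for spans in ner(sentences):
    counts.update(span["entity_group"] for span in spans)

print(counts)  # e.g. Counter({'NORP': ..., 'PER': ...})
```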
null
Non_BioNLP
# Named Entity Recognition for Ancient Greek Pretrained NER tagging model for ancient Greek # Data We trained the models on available annotated corpora in Ancient Greek. There are only two sizeable annotated datasets in Ancient Greek, which are currently under release: the first one, by Berti 2023, consists of a fully annotated text of Athenaeus’ Deipnosophists, developed in the context of the Digital Athenaeus project. The second one, by Foka et al. 2020, is a fully annotated text of Pausanias’ Periegesis Hellados, developed in the context of the Digital Periegesis project. In addition, we used smaller corpora annotated by students and scholars on Recogito: the Odyssey annotated by Kemp 2021; a mixed corpus including excerpts from the Library attributed to Apollodorus and from Strabo’s Geography, annotated by Chiara Palladino; Book 1 of Xenophon’s Anabasis, created by Thomas Visser; and Demosthenes’ Against Neaira, created by Rachel Milio. ### Training Dataset | | **Person** | **Location** | **NORP** | **MISC** | |----------------|------------------|-------------------|-------------------|-------------------| | Odyssey | 2.469 | 698 | 0 | 0 | | Deipnosophists | 14.921 | 2.699 | 5.110 | 3.060 | | Pausanias | 10.205 | 8.670 | 4.972 | 0 | | Other Datasets | 3.283 | 2.040 | 1.089 | 0 | | **Total** | **30.878** | **14.107** | **11.171** | **3.060** | --- ### Validation Dataset | | **Person** | **Location** | **NORP** | **MISC** | |----------------|------------------|-------------------|-------------------|-------------------| | Xenophon | 1.190 | 796 | 857 | 0 | # Results | Class | Metric | Test | Validation | |---------|-----------|--------|--------| | **LOC** | precision | 83.33% | 88.66% | | | recall | 81.27% | 88.94% | | | f1 | 82.29% | 88.80% | | **MISC** | precision | 83.25% | 0 | | | recall | 81.21% | 0 | | | f1 | 82.22% | 0 | | **NORP** | precision | 88.71% | 94.76% | | | recall | 90.76% | 94.50% | | | f1 | 89.73% | 94.63% | | **PER** | precision | 91.72% | 94.22% | | | recall | 94.42% | 96.06% | | | f1 | 93.05% | 95.13% | | **Overall** | precision | 88.83% | 92.91% | | | recall | 89.99% | 93.72% | | | f1 | 89.41% | 93.32% | | | Accuracy | 97.50% | 98.87% | # Usage This [colab notebook](https://colab.research.google.com/drive/1K6ER_C8d_AxBm0Yrtr628P3weH1Rxhht?usp=sharing) contains the necessary code to use the model.
```python from transformers import pipeline # create pipeline for NER ner = pipeline('ner', model="UGARIT/grc-ner-xlmr", aggregation_strategy = 'first') ner("ταῦτα εἴπας ὁ Ἀλέξανδρος παρίζει Πέρσῃ ἀνδρὶ ἄνδρα Μακεδόνα ὡς γυναῖκα τῷ λόγῳ · οἳ δέ , ἐπείτε σφέων οἱ Πέρσαι ψαύειν ἐπειρῶντο , διεργάζοντο αὐτούς .") ``` Output ``` [{'entity_group': 'PER', 'score': 0.9999428, 'word': '', 'start': 13, 'end': 14}, {'entity_group': 'PER', 'score': 0.99994195, 'word': 'Ἀλέξανδρος', 'start': 14, 'end': 24}, {'entity_group': 'NORP', 'score': 0.9087087, 'word': 'Πέρσῃ', 'start': 32, 'end': 38}, {'entity_group': 'NORP', 'score': 0.97572577, 'word': 'Μακεδόνα', 'start': 50, 'end': 59}, {'entity_group': 'NORP', 'score': 0.9993412, 'word': 'Πέρσαι', 'start': 104, 'end': 111}] ``` # Citation: ``` @inproceedings{palladino-yousef-2024-development, title = "Development of Robust {NER} Models and Named Entity Tagsets for {A}ncient {G}reek", author = "Palladino, Chiara and Yousef, Tariq", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.11", pages = "89--97", abstract = "This contribution presents a novel approach to the development and evaluation of transformer-based models for Named Entity Recognition and Classification in Ancient Greek texts. We trained two models with annotated datasets by consolidating potentially ambiguous entity types under a harmonized set of classes. Then, we tested their performance with out-of-domain texts, reproducing a real-world use case. Both models performed very well under these conditions, with the multilingual model being slightly superior on the monolingual one. In the conclusion, we emphasize current limitations due to the scarcity of high-quality annotated corpora and to the lack of cohesive annotation strategies for ancient languages.", } ```
{"base_model": ["UGARIT/grc-alignment"], "language": ["grc"], "license": "mit", "tags": ["token-classification"], "inference": {"parameters": {"aggregation_strategy": "first"}}, "widget": [{"text": "ταῦτα εἴπας ὁ Ἀλέξανδρος παρίζει Πέρσῃ ἀνδρὶ ἄνδρα Μακεδόνα ὡς γυναῖκα τῷ λόγῳ · οἳ δέ , ἐπείτε σφέων οἱ Πέρσαι ψαύειν ἐπειρῶντο , διεργάζοντο αὐτούς .", "example_title": "Example 1"}]}
task
[ "NAMED_ENTITY_RECOGNITION" ]
46,050
manojpreveen/gpt-neoxt-20b-v6
manojpreveen
text-generation
[ "transformers", "pytorch", "gpt_neox", "text-generation", "dataset:iamplus/Instruction_Tuning", "license:bigscience-openrail-m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-02-29T11:38:08Z
2024-02-29T11:38:08+00:00
17
0
--- datasets: - iamplus/Instruction_Tuning license: bigscience-openrail-m --- GPT-NeoXT-20B model instruction-tuned with ***Colossal AI*** on the Instruction Tuning dataset listed below (~560k examples). **Base Model:** togethercomputer/GPT-NeoXT-Chat-Base-20B (GPT-NeoXT-Chat-Base-20B-v0.16 - fine-tuned on feedback data) **Training Details :** * Epochs: 5 * Batch Size : 16 per device (instantaneous) x 1 gradient accumulation step x 8 GPUs = 128 * Max Length : 1024 * Weight Decay : 0 * Learning Rate : 2e-5 * Learning Rate Scheduler Type : Cosine * Number of warmup steps : 240 * Machine : 8xA100 80GB **Dataset Details :** Dataset : iamplus/Instruction_Tuning Files : * stanford_alpaca_it_v2.csv * ColossalChat.csv * unified_chip2.csv * iamai_summarization_v1.csv * iamai_v1.csv
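The card lists training details but no inference snippet. A minimal sketch is shown below (not part of the original card); the `<human>:`/`<bot>:` turn format of the GPT-NeoXT-Chat base model is an assumption, as is the fp16 + `device_map="auto"` loading strategy (the 20B checkpoint needs substantial GPU memory and the `accelerate` package).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "manojpreveen/gpt-neoxt-20b-v6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Assumed chat-style prompt; adjust if the checkpoint expects a different template.
prompt = "<human>: Summarize the benefits of instruction tuning in two sentences.\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```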
null
Non_BioNLP
GPT-NeoXT-20B model instruction-tuned with ***Colossal AI*** on the Instruction Tuning dataset listed below (~560k examples). **Base Model:** togethercomputer/GPT-NeoXT-Chat-Base-20B (GPT-NeoXT-Chat-Base-20B-v0.16 - fine-tuned on feedback data) **Training Details :** * Epochs: 5 * Batch Size : 16 per device (instantaneous) x 1 gradient accumulation step x 8 GPUs = 128 * Max Length : 1024 * Weight Decay : 0 * Learning Rate : 2e-5 * Learning Rate Scheduler Type : Cosine * Number of warmup steps : 240 * Machine : 8xA100 80GB **Dataset Details :** Dataset : iamplus/Instruction_Tuning Files : * stanford_alpaca_it_v2.csv * ColossalChat.csv * unified_chip2.csv * iamai_summarization_v1.csv * iamai_v1.csv
{"datasets": ["iamplus/Instruction_Tuning"], "license": "bigscience-openrail-m"}
task
[ "SUMMARIZATION" ]
46,051
PIXMELT/all-MiniLM-L6-v2
PIXMELT
sentence-similarity
[ "sentence-transformers", "pytorch", "tf", "rust", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "en", "dataset:s2orc", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:code_search_net", "dataset:search_qa", "dataset:eli5", "dataset:snli", "dataset:multi_nli", "dataset:wikihow", "dataset:natural_questions", "dataset:trivia_qa", "dataset:embedding-data/sentence-compression", "dataset:embedding-data/flickr30k-captions", "dataset:embedding-data/altlex", "dataset:embedding-data/simple-wiki", "dataset:embedding-data/QQP", "dataset:embedding-data/SPECTER", "dataset:embedding-data/PAQ_pairs", "dataset:embedding-data/WikiAnswers", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-04-25T03:24:27Z
2024-04-25T03:26:13+00:00
14
0
--- datasets: - s2orc - flax-sentence-embeddings/stackexchange_xml - ms_marco - gooaq - yahoo_answers_topics - code_search_net - search_qa - eli5 - snli - multi_nli - wikihow - natural_questions - trivia_qa - embedding-data/sentence-compression - embedding-data/flickr30k-captions - embedding-data/altlex - embedding-data/simple-wiki - embedding-data/QQP - embedding-data/SPECTER - embedding-data/PAQ_pairs - embedding-data/WikiAnswers language: en library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2) ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned in on a 1B sentence pairs dataset. 
We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Googles Flax, JAX, and Cloud team member about efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 256 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch. We then apply the cross entropy loss by comparing with true pairs. #### Hyper parameters We trained our model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core). We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file. 
| Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack 
Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
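The fine-tuning section above describes the objective only in words: cosine similarities are computed between every sentence pair in the batch, and cross-entropy rewards the true pairing. A compact sketch of that in-batch contrastive loss follows (not part of the original card; the batch shape and scale factor are illustrative assumptions).

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    """anchor_emb, positive_emb: (batch, dim) embeddings of paired sentences."""
    a = F.normalize(anchor_emb, p=2, dim=1)
    p = F.normalize(positive_emb, p=2, dim=1)
    scores = a @ p.T * scale                # (batch, batch) cosine similarity matrix
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)  # the true pair lies on the diagonal

# Random embeddings stand in for encoder outputs (dim 384, as for this model).
loss = in_batch_contrastive_loss(torch.randn(8, 384), torch.randn(8, 384))
print(loss.item())
```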
null
Non_BioNLP
# all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2) ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned in on a 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). 
We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Googles Flax, JAX, and Cloud team member about efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 256 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch. We then apply the cross entropy loss by comparing with true pairs. #### Hyper parameters We trained our model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core). We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file. | Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | 
[paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
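The in-batch contrastive objective described in the fine-tuning section above can be illustrated with a short, self-contained sketch. This is not the authors' training code (see `train_script.py` in this repository for that); the random tensors standing in for encoder outputs and the similarity scale of 20 are assumptions made purely for illustration.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    """Cross-entropy over cosine similarities: each anchor should match its own
    positive, with every other positive in the batch acting as a negative.
    `scale` is an assumed similarity multiplier, not the value used in training."""
    anchor_emb = F.normalize(anchor_emb, p=2, dim=1)
    positive_emb = F.normalize(positive_emb, p=2, dim=1)
    # Cosine similarity between every anchor and every positive in the batch
    scores = anchor_emb @ positive_emb.T * scale          # shape: (batch, batch)
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)

# Toy usage with random "embeddings" standing in for model outputs
batch, dim = 8, 384
loss = in_batch_contrastive_loss(torch.randn(batch, dim), torch.randn(batch, dim))
print(loss.item())
```

Each anchor is scored against every positive in the batch, and the cross-entropy loss pushes it toward its own pair — the same "pick the true pair among sampled negatives" formulation described above.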
{"datasets": ["s2orc", "flax-sentence-embeddings/stackexchange_xml", "ms_marco", "gooaq", "yahoo_answers_topics", "code_search_net", "search_qa", "eli5", "snli", "multi_nli", "wikihow", "natural_questions", "trivia_qa", "embedding-data/sentence-compression", "embedding-data/flickr30k-captions", "embedding-data/altlex", "embedding-data/simple-wiki", "embedding-data/QQP", "embedding-data/SPECTER", "embedding-data/PAQ_pairs", "embedding-data/WikiAnswers"], "language": "en", "library_name": "sentence-transformers", "license": "apache-2.0", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"]}
task
[ "QUESTION_ANSWERING" ]
46,052
MikeRoz/c4ai-command-r-plus-08-2024-3.0bpw-h8-exl2
MikeRoz
text-generation
[ "transformers", "safetensors", "cohere", "text-generation", "conversational", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "3-bit", "exl2", "region:us" ]
2024-08-31T09:27:06Z
2024-09-01T00:42:56+00:00
12
0
--- language: - en - fr - de - es - it - pt - ja - ko - zh - ar library_name: transformers license: cc-by-nc-4.0 inference: false extra_gated_prompt: By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and acknowledge that the information you provide will be collected, used, and shared in accordance with Cohere’s [Privacy Policy]( https://cohere.com/privacy). extra_gated_fields: Name: text Affiliation: text Country: type: select options: - Aruba - Afghanistan - Angola - Anguilla - Åland Islands - Albania - Andorra - United Arab Emirates - Argentina - Armenia - American Samoa - Antarctica - French Southern Territories - Antigua and Barbuda - Australia - Austria - Azerbaijan - Burundi - Belgium - Benin - Bonaire Sint Eustatius and Saba - Burkina Faso - Bangladesh - Bulgaria - Bahrain - Bahamas - Bosnia and Herzegovina - Saint Barthélemy - Belarus - Belize - Bermuda - Plurinational State of Bolivia - Brazil - Barbados - Brunei-Darussalam - Bhutan - Bouvet-Island - Botswana - Central African Republic - Canada - Cocos (Keeling) Islands - Switzerland - Chile - China - Côte-dIvoire - Cameroon - Democratic Republic of the Congo - Cook Islands - Colombia - Comoros - Cabo Verde - Costa Rica - Cuba - Curaçao - Christmas Island - Cayman Islands - Cyprus - Czechia - Germany - Djibouti - Dominica - Denmark - Dominican Republic - Algeria - Ecuador - Egypt - Eritrea - Western Sahara - Spain - Estonia - Ethiopia - Finland - Fiji - Falkland Islands (Malvinas) - France - Faroe Islands - Federated States of Micronesia - Gabon - United Kingdom - Georgia - Guernsey - Ghana - Gibraltar - Guinea - Guadeloupe - Gambia - Guinea Bissau - Equatorial Guinea - Greece - Grenada - Greenland - Guatemala - French Guiana - Guam - Guyana - Hong Kong - Heard Island and McDonald Islands - Honduras - Croatia - Haiti - Hungary - Indonesia - Isle of Man - India - British Indian Ocean Territory - Ireland - Islamic Republic of Iran - Iraq - Iceland - Israel - Italy - Jamaica - Jersey - Jordan - Japan - Kazakhstan - Kenya - Kyrgyzstan - Cambodia - Kiribati - Saint-Kitts-and-Nevis - South Korea - Kuwait - Lao-Peoples-Democratic-Republic - Lebanon - Liberia - Libya - Saint-Lucia - Liechtenstein - Sri Lanka - Lesotho - Lithuania - Luxembourg - Latvia - Macao - Saint Martin (French-part) - Morocco - Monaco - Republic of Moldova - Madagascar - Maldives - Mexico - Marshall Islands - North Macedonia - Mali - Malta - Myanmar - Montenegro - Mongolia - Northern Mariana Islands - Mozambique - Mauritania - Montserrat - Martinique - Mauritius - Malawi - Malaysia - Mayotte - Namibia - New Caledonia - Niger - Norfolk Island - Nigeria - Nicaragua - Niue - Netherlands - Norway - Nepal - Nauru - New Zealand - Oman - Pakistan - Panama - Pitcairn - Peru - Philippines - Palau - Papua New Guinea - Poland - Puerto Rico - North Korea - Portugal - Paraguay - State of Palestine - French Polynesia - Qatar - Réunion - Romania - Russia - Rwanda - Saudi Arabia - Sudan - Senegal - Singapore - South Georgia and the South Sandwich Islands - Saint Helena Ascension and Tristan da Cunha - Svalbard and Jan Mayen - Solomon Islands - Sierra Leone - El Salvador - San Marino - Somalia - Saint Pierre and Miquelon - Serbia - South Sudan - Sao Tome and Principe - Suriname - Slovakia - Slovenia - Sweden - Eswatini - Sint Maarten (Dutch-part) - Seychelles - Syrian Arab Republic - Turks and Caicos Islands - Chad - Togo - Thailand - Tajikistan - Tokelau - Turkmenistan - Timor Leste - Tonga - Trinidad and Tobago - Tunisia - 
Turkey - Tuvalu - Taiwan - United Republic of Tanzania - Uganda - Ukraine - United States Minor Outlying Islands - Uruguay - United-States - Uzbekistan - Holy See (Vatican City State) - Saint Vincent and the Grenadines - Bolivarian Republic of Venezuela - Virgin Islands British - Virgin Islands U.S. - VietNam - Vanuatu - Wallis and Futuna - Samoa - Yemen - South Africa - Zambia - Zimbabwe Receive email updates on C4AI and Cohere research, events, products and services?: type: select options: - true - false I agree to use this model for non-commercial use ONLY: checkbox --- # Model Card for C4AI Command R+ 08-2024 ## Model Summary C4AI Command R+ 08-2024 is an open weights research release of a 104 billion parameter model with highly advanced capabilities, including Retrieval Augmented Generation (RAG) and tool use to automate sophisticated tasks. This model generation supports multi-step tool use, which allows the model to combine multiple tools over multiple steps to accomplish difficult tasks. C4AI Command R+ 08-2024 is a multilingual model trained on 23 languages and evaluated in 10 languages. Command R+ 08-2024 is optimized for a variety of use cases including reasoning, summarization, and question answering. C4AI Command R+ 08-2024 is part of a family of open weight releases from Cohere For AI and Cohere. Our smaller companion model is [C4AI Command R 08-2024](https://huggingface.co/CohereForAI/c4ai-command-r-08-2024). - Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/) - License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license); also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy) - Model: c4ai-command-r-plus-08-2024 - Model Size: 104 billion parameters - Context length: 128K **Try C4AI Command R+** You can try out C4AI Command R+ before downloading the weights in our hosted [Hugging Face Space](https://huggingface.co/spaces/CohereForAI/c4ai-command?model=command-r-plus-08-2024). **Usage** Please use `transformers` version 4.39.1 or higher. ```python # pip install 'transformers>=4.39.1' from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "CohereForAI/c4ai-command-r-plus-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) # Format message with the command-r-plus-08-2024 chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` ## Model Details **Input**: Models input text only. **Output**: Models generate text only. **Model Architecture**: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety. We use grouped query attention (GQA) to improve inference speed.
**Languages covered**: The model has been trained on 23 languages (English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Simplified Chinese, Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian) and evaluated on 10 languages (English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Simplified Chinese). **Context length**: Command R+ 08-2024 supports a context length of 128K. ### Tool use & Agent capabilities: Command R+ 08-2024 has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance, but we encourage experimentation. Command R+ 08-2024’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R+ 08-2024 may use one of its supplied tools more than once. The model has been trained to recognise a special `directly_answer` tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions. We recommend including the `directly_answer` tool, but it can be removed or renamed if required. Comprehensive documentation for working with Command R+ 08-2024's tool use prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r). Command R+ 08-2024 also supports Hugging Face's [tool use API](https://huggingface.co/docs/transformers/main/en/chat_templating#advanced-tool-use--function-calling). The code snippets below show minimal working examples on how to render a prompt. 
<details> <summary><b>Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]</b> </summary> ```python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-plus-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # Define tools available for the model to use: tools = [ { "name": "internet_search", "description": "Returns a list of relevant document snippets for a textual query retrieved from the internet", "parameter_definitions": { "query": { "description": "Query to search the internet with", "type": 'str', "required": True } } }, { 'name': "directly_answer", "description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history", 'parameter_definitions': {} } ] # render the tool use prompt as a string: tool_use_prompt = tokenizer.apply_tool_use_template( conversation, tools=tools, tokenize=False, add_generation_prompt=True, ) print(tool_use_prompt) ``` </details> <details> <summary><b>Usage: Rendering prompts with the Tool Use API [CLICK TO EXPAND]</b> </summary> ```python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-plus-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # Define tools available for the model to use # Type hints and docstrings from Python functions are automatically extracted def internet_search(query: str): """ Returns a list of relevant document snippets for a textual query retrieved from the internet Args: query: Query to search the internet with """ pass def directly_answer(): """ Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history """ pass tools = [internet_search, directly_answer] # render the tool use prompt as a string: tool_use_prompt = tokenizer.apply_chat_template( conversation, tools=tools, tokenize=False, add_generation_prompt=True, ) print(tool_use_prompt) ``` </details> <details> <summary><b>Example Rendered Tool Use Prompt [CLICK TO EXPAND]</b></summary> ```` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling. 
## Available Tools Here is a list of tools that you have available to you: ```python def internet_search(query: str) -> List[Dict]: """Returns a list of relevant document snippets for a textual query retrieved from the internet Args: query (str): Query to search the internet with """ pass ``` ```python def directly_answer() -> List[Dict]: """Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history """ pass ```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write 'Action:' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user's last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. The list of actions you want to call should be formatted as a list of json objects, for example: ```json [ { "tool_name": title of the tool in the specification, "parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters } ]```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Tool Use Completion [CLICK TO EXPAND]</b></summary> ```` Action: ```json [ { "tool_name": "internet_search", "parameters": { "query": "biggest penguin in the world" } } ] ``` ```` </details> ### Grounded Generation and RAG Capabilities: Command R+ 08-2024 has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation. Command R+ 08-2024’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured. By default, Command R+ 08-2024 will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will then insert grounding spans into the answer. See below for an example. This is referred to as `accurate` grounded generation. The model is trained with a number of other answering modes, which can be selected by prompt changes. A `fast` citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens. 
Comprehensive documentation for working with Command R+ 08-2024's grounded generation prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r). The code snippet below shows a minimal working example on how to render a prompt. <details> <summary> <b>Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]</b> </summary> ````python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-plus-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # define documents to ground on: documents = [ { "title": "Tall penguins", "text": "Emperor penguins are the tallest growing up to 122 cm in height." }, { "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."} ] # render the tool use prompt as a string: grounded_generation_prompt = tokenizer.apply_grounded_generation_template( conversation, documents=documents, citation_mode="accurate", # or "fast" tokenize=False, add_generation_prompt=True, ) print(grounded_generation_prompt) ```` </details> <details> <summary><b>Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]</b></summary> ```` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|><results> Document: 0 title: Tall penguins text: Emperor penguins are the tallest growing up to 122 cm in height. Document: 1 title: Penguin habitats text: Emperor penguins only live in Antarctica. </results><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Carefully perform the following instructions, in order, starting each with a new line. Firstly, Decide which of the retrieved documents are relevant to the user's last input by writing 'Relevant Documents:' followed by comma-separated list of document numbers. If none are relevant, you should instead write 'None'. Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user's last input by writing 'Cited Documents:' followed a comma-separated list of document numbers. 
If you dont want to cite any of them, you should instead write 'None'. Thirdly, Write 'Answer:' followed by a response to the user's last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup. Finally, Write 'Grounded answer:' followed by a response to the user's last input in high quality natural english. Use the symbols <co: doc> and </co: doc> to indicate when a fact comes from a document in the search result, e.g <co: 0>my fact</co: 0> for a fact from document 0.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Grounded Generation Completion [CLICK TO EXPAND]</b></summary> ```` Relevant Documents: 0,1 Cited Documents: 0,1 Answer: The Emperor Penguin is the tallest or biggest penguin in the world. It is a bird that lives only in Antarctica and grows to a height of around 122 centimetres. Grounded answer: The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest penguin in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1> and <co: 0>grows to a height of around 122 centimetres.</co: 0> ```` </details> ### Code Capabilities: Command R+ 08-2024 has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions. ### Model Card Contact For errors or additional questions about details in this model card, contact [[email protected]](mailto:[email protected]). ### Terms of Use: We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 104 billion parameter model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy). ### Try Chat: You can try Command R+ 08-2024 chat in the playground [here](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/c4ai-command?model=command-r-plus-08-2024).
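To make the Code Capabilities recommendation above concrete, the sketch below reuses the `transformers` usage pattern from this card but switches to greedy decoding for a code-oriented instruction. The example prompt and `max_new_tokens` value are illustrative assumptions, not settings prescribed by Cohere.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereForAI/c4ai-command-r-plus-08-2024"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A code-oriented instruction (illustrative example prompt)
messages = [{"role": "user", "content": "Write a Python function that reverses a linked list."}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)

# Greedy decoding (do_sample=False) tends to be more reliable for code;
# alternatively, keep sampling but use a low temperature such as 0.1-0.3.
gen_tokens = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=False,
)
print(tokenizer.decode(gen_tokens[0]))
```

If you prefer sampling, keeping `do_sample=True` with a temperature around 0.1–0.3 (the card's own example uses 0.3) is a reasonable middle ground.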
null
Non_BioNLP
# Model Card for C4AI Command R+ 08-2024 ## Model Summary C4AI Command R+ 08-2024 is an open weights research release of a 104 billion parameter model with highly advanced capabilities, including Retrieval Augmented Generation (RAG) and tool use to automate sophisticated tasks. This model generation supports multi-step tool use, which allows the model to combine multiple tools over multiple steps to accomplish difficult tasks. C4AI Command R+ 08-2024 is a multilingual model trained on 23 languages and evaluated in 10 languages. Command R+ 08-2024 is optimized for a variety of use cases including reasoning, summarization, and question answering. C4AI Command R+ 08-2024 is part of a family of open weight releases from Cohere For AI and Cohere. Our smaller companion model is [C4AI Command R 08-2024](https://huggingface.co/CohereForAI/c4ai-command-r-08-2024). - Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/) - License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license); also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy) - Model: c4ai-command-r-plus-08-2024 - Model Size: 104 billion parameters - Context length: 128K **Try C4AI Command R+** You can try out C4AI Command R+ before downloading the weights in our hosted [Hugging Face Space](https://huggingface.co/spaces/CohereForAI/c4ai-command?model=command-r-plus-08-2024). **Usage** Please use `transformers` version 4.39.1 or higher. ```python # pip install 'transformers>=4.39.1' from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "CohereForAI/c4ai-command-r-plus-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) # Format message with the command-r-plus-08-2024 chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` ## Model Details **Input**: Models input text only. **Output**: Models generate text only. **Model Architecture**: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety. We use grouped query attention (GQA) to improve inference speed. **Languages covered**: The model has been trained on 23 languages (English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Simplified Chinese, Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian) and evaluated on 10 languages (English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Simplified Chinese). **Context length**: Command R+ 08-2024 supports a context length of 128K. ### Tool use & Agent capabilities: Command R+ 08-2024 has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template.
Deviating from this prompt template will likely reduce performance, but we encourage experimentation. Command R+ 08-2024’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R+ 08-2024 may use one of its supplied tools more than once. The model has been trained to recognise a special `directly_answer` tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions. We recommend including the `directly_answer` tool, but it can be removed or renamed if required. Comprehensive documentation for working with Command R+ 08-2024's tool use prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r). Command R+ 08-2024 also supports Hugging Face's [tool use API](https://huggingface.co/docs/transformers/main/en/chat_templating#advanced-tool-use--function-calling). The code snippets below show minimal working examples on how to render a prompt. <details> <summary><b>Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]</b> </summary> ```python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-plus-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # Define tools available for the model to use: tools = [ { "name": "internet_search", "description": "Returns a list of relevant document snippets for a textual query retrieved from the internet", "parameter_definitions": { "query": { "description": "Query to search the internet with", "type": 'str', "required": True } } }, { 'name': "directly_answer", "description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history", 'parameter_definitions': {} } ] # render the tool use prompt as a string: tool_use_prompt = tokenizer.apply_tool_use_template( conversation, tools=tools, tokenize=False, add_generation_prompt=True, ) print(tool_use_prompt) ``` </details> <details> <summary><b>Usage: Rendering prompts with the Tool Use API [CLICK TO EXPAND]</b> </summary> ```python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-plus-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # Define tools available for the model to use # Type hints and docstrings from Python functions are automatically extracted def internet_search(query: str): """ Returns a list of relevant document snippets for a textual query retrieved from the internet Args: query: Query to search the internet with """ pass def directly_answer(): """ Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history """ pass tools = [internet_search, directly_answer] # render the tool use prompt as a string: tool_use_prompt = tokenizer.apply_chat_template( conversation, tools=tools, tokenize=False, add_generation_prompt=True, ) print(tool_use_prompt) ``` </details> <details> <summary><b>Example Rendered Tool Use Prompt [CLICK TO EXPAND]</b></summary> ```` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this 
section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling. ## Available Tools Here is a list of tools that you have available to you: ```python def internet_search(query: str) -> List[Dict]: """Returns a list of relevant document snippets for a textual query retrieved from the internet Args: query (str): Query to search the internet with """ pass ``` ```python def directly_answer() -> List[Dict]: """Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history """ pass ```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write 'Action:' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user's last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. The list of actions you want to call should be formatted as a list of json objects, for example: ```json [ { "tool_name": title of the tool in the specification, "parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters } ]```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Tool Use Completion [CLICK TO EXPAND]</b></summary> ```` Action: ```json [ { "tool_name": "internet_search", "parameters": { "query": "biggest penguin in the world" } } ] ``` ```` </details> ### Grounded Generation and RAG Capabilities: Command R+ 08-2024 has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation. 
Command R+ 08-2024’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured. By default, Command R+ 08-2024 will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will then insert grounding spans into the answer. See below for an example. This is referred to as `accurate` grounded generation. The model is trained with a number of other answering modes, which can be selected by prompt changes. A `fast` citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens. Comprehensive documentation for working with Command R+ 08-2024's grounded generation prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r). The code snippet below shows a minimal working example on how to render a prompt. <details> <summary> <b>Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]</b> </summary> ````python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-plus-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # define documents to ground on: documents = [ { "title": "Tall penguins", "text": "Emperor penguins are the tallest growing up to 122 cm in height." }, { "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."} ] # render the tool use prompt as a string: grounded_generation_prompt = tokenizer.apply_grounded_generation_template( conversation, documents=documents, citation_mode="accurate", # or "fast" tokenize=False, add_generation_prompt=True, ) print(grounded_generation_prompt) ```` </details> <details> <summary><b>Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]</b></summary> ```` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. 
## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|><results> Document: 0 title: Tall penguins text: Emperor penguins are the tallest growing up to 122 cm in height. Document: 1 title: Penguin habitats text: Emperor penguins only live in Antarctica. </results><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Carefully perform the following instructions, in order, starting each with a new line. Firstly, Decide which of the retrieved documents are relevant to the user's last input by writing 'Relevant Documents:' followed by comma-separated list of document numbers. If none are relevant, you should instead write 'None'. Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user's last input by writing 'Cited Documents:' followed a comma-separated list of document numbers. If you dont want to cite any of them, you should instead write 'None'. Thirdly, Write 'Answer:' followed by a response to the user's last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup. Finally, Write 'Grounded answer:' followed by a response to the user's last input in high quality natural english. Use the symbols <co: doc> and </co: doc> to indicate when a fact comes from a document in the search result, e.g <co: 0>my fact</co: 0> for a fact from document 0.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Grounded Generation Completion [CLICK TO EXPAND]</b></summary> ```` Relevant Documents: 0,1 Cited Documents: 0,1 Answer: The Emperor Penguin is the tallest or biggest penguin in the world. It is a bird that lives only in Antarctica and grows to a height of around 122 centimetres. Grounded answer: The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest penguin in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1> and <co: 0>grows to a height of around 122 centimetres.</co: 0> ```` </details> ### Code Capabilities: Command R+ 08-2024 has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions. ### Model Card Contact For errors or additional questions about details in this model card, contact [[email protected]](mailto:[email protected]). ### Terms of Use: We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 104 billion parameter model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy). ### Try Chat: You can try Command R+ 08-2024 chat in the playground [here](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/c4ai-command?model=command-r-plus-08-2024).
{"language": ["en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar"], "library_name": "transformers", "license": "cc-by-nc-4.0", "inference": false, "extra_gated_prompt": "By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and acknowledge that the information you provide will be collected, used, and shared in accordance with Cohere’s [Privacy Policy]( https://cohere.com/privacy).", "extra_gated_fields": {"Name": "text", "Affiliation": "text", "Country": {"type": "select", "options": ["Aruba", "Afghanistan", "Angola", "Anguilla", "Åland Islands", "Albania", "Andorra", "United Arab Emirates", "Argentina", "Armenia", "American Samoa", "Antarctica", "French Southern Territories", "Antigua and Barbuda", "Australia", "Austria", "Azerbaijan", "Burundi", "Belgium", "Benin", "Bonaire Sint Eustatius and Saba", "Burkina Faso", "Bangladesh", "Bulgaria", "Bahrain", "Bahamas", "Bosnia and Herzegovina", "Saint Barthélemy", "Belarus", "Belize", "Bermuda", "Plurinational State of Bolivia", "Brazil", "Barbados", "Brunei-Darussalam", "Bhutan", "Bouvet-Island", "Botswana", "Central African Republic", "Canada", "Cocos (Keeling) Islands", "Switzerland", "Chile", "China", "Côte-dIvoire", "Cameroon", "Democratic Republic of the Congo", "Cook Islands", "Colombia", "Comoros", "Cabo Verde", "Costa Rica", "Cuba", "Curaçao", "Christmas Island", "Cayman Islands", "Cyprus", "Czechia", "Germany", "Djibouti", "Dominica", "Denmark", "Dominican Republic", "Algeria", "Ecuador", "Egypt", "Eritrea", "Western Sahara", "Spain", "Estonia", "Ethiopia", "Finland", "Fiji", "Falkland Islands (Malvinas)", "France", "Faroe Islands", "Federated States of Micronesia", "Gabon", "United Kingdom", "Georgia", "Guernsey", "Ghana", "Gibraltar", "Guinea", "Guadeloupe", "Gambia", "Guinea Bissau", "Equatorial Guinea", "Greece", "Grenada", "Greenland", "Guatemala", "French Guiana", "Guam", "Guyana", "Hong Kong", "Heard Island and McDonald Islands", "Honduras", "Croatia", "Haiti", "Hungary", "Indonesia", "Isle of Man", "India", "British Indian Ocean Territory", "Ireland", "Islamic Republic of Iran", "Iraq", "Iceland", "Israel", "Italy", "Jamaica", "Jersey", "Jordan", "Japan", "Kazakhstan", "Kenya", "Kyrgyzstan", "Cambodia", "Kiribati", "Saint-Kitts-and-Nevis", "South Korea", "Kuwait", "Lao-Peoples-Democratic-Republic", "Lebanon", "Liberia", "Libya", "Saint-Lucia", "Liechtenstein", "Sri Lanka", "Lesotho", "Lithuania", "Luxembourg", "Latvia", "Macao", "Saint Martin (French-part)", "Morocco", "Monaco", "Republic of Moldova", "Madagascar", "Maldives", "Mexico", "Marshall Islands", "North Macedonia", "Mali", "Malta", "Myanmar", "Montenegro", "Mongolia", "Northern Mariana Islands", "Mozambique", "Mauritania", "Montserrat", "Martinique", "Mauritius", "Malawi", "Malaysia", "Mayotte", "Namibia", "New Caledonia", "Niger", "Norfolk Island", "Nigeria", "Nicaragua", "Niue", "Netherlands", "Norway", "Nepal", "Nauru", "New Zealand", "Oman", "Pakistan", "Panama", "Pitcairn", "Peru", "Philippines", "Palau", "Papua New Guinea", "Poland", "Puerto Rico", "North Korea", "Portugal", "Paraguay", "State of Palestine", "French Polynesia", "Qatar", "Réunion", "Romania", "Russia", "Rwanda", "Saudi Arabia", "Sudan", "Senegal", "Singapore", "South Georgia and the South Sandwich Islands", "Saint Helena Ascension and Tristan da Cunha", "Svalbard and Jan Mayen", "Solomon Islands", "Sierra Leone", "El Salvador", "San Marino", "Somalia", "Saint Pierre and Miquelon", "Serbia", "South Sudan", "Sao Tome and Principe", 
"Suriname", "Slovakia", "Slovenia", "Sweden", "Eswatini", "Sint Maarten (Dutch-part)", "Seychelles", "Syrian Arab Republic", "Turks and Caicos Islands", "Chad", "Togo", "Thailand", "Tajikistan", "Tokelau", "Turkmenistan", "Timor Leste", "Tonga", "Trinidad and Tobago", "Tunisia", "Turkey", "Tuvalu", "Taiwan", "United Republic of Tanzania", "Uganda", "Ukraine", "United States Minor Outlying Islands", "Uruguay", "United-States", "Uzbekistan", "Holy See (Vatican City State)", "Saint Vincent and the Grenadines", "Bolivarian Republic of Venezuela", "Virgin Islands British", "Virgin Islands U.S.", "VietNam", "Vanuatu", "Wallis and Futuna", "Samoa", "Yemen", "South Africa", "Zambia", "Zimbabwe"]}, "Receive email updates on C4AI and Cohere research, events, products and services?": {"type": "select", "options": [true, false]}, "I agree to use this model for non-commercial use ONLY": "checkbox"}}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
46,053
persiannlp/mt5-small-parsinlu-translation_en_fa
persiannlp
text2text-generation
[ "transformers", "pytorch", "mt5", "text2text-generation", "machine-translation", "persian", "farsi", "fa", "multilingual", "dataset:parsinlu", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2021-09-23T16:20:48+00:00
1,080
3
--- datasets: - parsinlu language: - fa - multilingual license: cc-by-nc-sa-4.0 metrics: - sacrebleu tags: - machine-translation - mt5 - persian - farsi thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg --- # Machine Translation (ترجمه‌ی ماشینی) This is an mT5-based model for machine translation (English -> Persian). Here is an example of how you can run this model: ```python from transformers import MT5ForConditionalGeneration, MT5Tokenizer model_size = "small" model_name = f"persiannlp/mt5-{model_size}-parsinlu-translation_en_fa" tokenizer = MT5Tokenizer.from_pretrained(model_name) model = MT5ForConditionalGeneration.from_pretrained(model_name) def run_model(input_string, **generator_args): input_ids = tokenizer.encode(input_string, return_tensors="pt") res = model.generate(input_ids, **generator_args) output = tokenizer.batch_decode(res, skip_special_tokens=True) print(output) return output run_model("Praise be to Allah, the Cherisher and Sustainer of the worlds;") run_model("shrouds herself in white and walks penitentially disguised as brotherly love through factories and parliaments; offers help, but desires power;") run_model("He thanked all fellow bloggers and organizations that showed support.") run_model("Races are held between April and December at the Veliefendi Hippodrome near Bakerky, 15 km (9 miles) west of Istanbul.") run_model("I want to pursue PhD in Computer Science about social network,what is the open problem in social networks?") ``` which should output: ``` ['برای الله، یعنی چرنده و سوزان دنیا، تحسین کنید'] ['خودش را در سفید پوسته می کند و به صورت عشق برادرانه'] ['او از تمام بلاگرها و سازمان هایی که حمایتشان را نشان می داد'] ['در طول ماه آوریل و دسامبر در والی فیودورونا نزدیک بیکر'] ['من می خواهم در مورد شبکه اجتماعی تحقیقات علوم کامپیوتری را دن'] ``` For more details, visit this page: https://github.com/persiannlp/parsinlu/
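Because `run_model` forwards `**generator_args` straight to `model.generate`, standard Hugging Face decoding options can be passed through to trade speed for completeness; the truncated sample outputs above suggest the default output length is short. The self-contained sketch below uses illustrative values (beam width, `max_length`), not settings recommended by the model authors.

```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer

model_name = "persiannlp/mt5-small-parsinlu-translation_en_fa"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)

# Any keyword accepted by model.generate() can be used; beam search and a
# longer max_length usually give more complete translations than the defaults.
input_ids = tokenizer.encode(
    "He thanked all fellow bloggers and organizations that showed support.",
    return_tensors="pt",
)
outputs = model.generate(input_ids, num_beams=4, max_length=64, early_stopping=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```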
null
Non_BioNLP
# Machine Translation (ترجمه‌ی ماشینی) This is an mT5-based model for machine translation (English -> Persian). Here is an example of how you can run this model: ```python from transformers import MT5ForConditionalGeneration, MT5Tokenizer model_size = "small" model_name = f"persiannlp/mt5-{model_size}-parsinlu-translation_en_fa" tokenizer = MT5Tokenizer.from_pretrained(model_name) model = MT5ForConditionalGeneration.from_pretrained(model_name) def run_model(input_string, **generator_args): input_ids = tokenizer.encode(input_string, return_tensors="pt") res = model.generate(input_ids, **generator_args) output = tokenizer.batch_decode(res, skip_special_tokens=True) print(output) return output run_model("Praise be to Allah, the Cherisher and Sustainer of the worlds;") run_model("shrouds herself in white and walks penitentially disguised as brotherly love through factories and parliaments; offers help, but desires power;") run_model("He thanked all fellow bloggers and organizations that showed support.") run_model("Races are held between April and December at the Veliefendi Hippodrome near Bakerky, 15 km (9 miles) west of Istanbul.") run_model("I want to pursue PhD in Computer Science about social network,what is the open problem in social networks?") ``` which should output: ``` ['برای الله، یعنی چرنده و سوزان دنیا، تحسین کنید'] ['خودش را در سفید پوسته می کند و به صورت عشق برادرانه'] ['او از تمام بلاگرها و سازمان هایی که حمایتشان را نشان می داد'] ['در طول ماه آوریل و دسامبر در والی فیودورونا نزدیک بیکر'] ['من می خواهم در مورد شبکه اجتماعی تحقیقات علوم کامپیوتری را دن'] ``` For more details, visit this page: https://github.com/persiannlp/parsinlu/
{"datasets": ["parsinlu"], "language": ["fa", "multilingual"], "license": "cc-by-nc-sa-4.0", "metrics": ["sacrebleu"], "tags": ["machine-translation", "mt5", "persian", "farsi"], "thumbnail": "https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg"}
task
[ "TRANSLATION" ]
46,054
solidrust/bagel-dpo-7b-v0.5-AWQ
solidrust
text-generation
[ "transformers", "safetensors", "mistral", "text-generation", "finetuned", "quantized", "4-bit", "AWQ", "pytorch", "instruct", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "dataset:ai2_arc", "dataset:allenai/ultrafeedback_binarized_cleaned", "dataset:argilla/distilabel-intel-orca-dpo-pairs", "dataset:jondurbin/airoboros-3.2", "dataset:codeparrot/apps", "dataset:facebook/belebele", "dataset:bluemoon-fandom-1-1-rp-cleaned", "dataset:boolq", "dataset:camel-ai/biology", "dataset:camel-ai/chemistry", "dataset:camel-ai/math", "dataset:camel-ai/physics", "dataset:jondurbin/contextual-dpo-v0.1", "dataset:jondurbin/gutenberg-dpo-v0.1", "dataset:jondurbin/py-dpo-v0.1", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:LDJnr/Capybara", "dataset:jondurbin/cinematika-v0.1", "dataset:WizardLM/WizardLM_evol_instruct_70k", "dataset:glaiveai/glaive-function-calling-v2", "dataset:grimulkan/LimaRP-augmented", "dataset:lmsys/lmsys-chat-1m", "dataset:ParisNeo/lollms_aware_dataset", "dataset:TIGER-Lab/MathInstruct", "dataset:Muennighoff/natural-instructions", "dataset:openbookqa", "dataset:kingbri/PIPPA-shareGPT", "dataset:piqa", "dataset:Vezora/Tested-22k-Python-Alpaca", "dataset:ropes", "dataset:cakiki/rosetta-code", "dataset:Open-Orca/SlimOrca", "dataset:b-mc2/sql-create-context", "dataset:squad_v2", "dataset:mattpscott/airoboros-summarization", "dataset:migtissera/Synthia-v1.3", "dataset:unalignment/toxic-dpo-v0.2", "dataset:WhiteRabbitNeo/WRN-Chapter-1", "dataset:WhiteRabbitNeo/WRN-Chapter-2", "dataset:winogrande", "base_model:jondurbin/bagel-dpo-7b-v0.4", "base_model:quantized:jondurbin/bagel-dpo-7b-v0.4", "license:apache-2.0", "awq" ]
2024-04-15T07:04:48Z
2024-09-03T08:08:51+00:00
11
0
--- base_model: jondurbin/bagel-dpo-7b-v0.4 datasets: - ai2_arc - allenai/ultrafeedback_binarized_cleaned - argilla/distilabel-intel-orca-dpo-pairs - jondurbin/airoboros-3.2 - codeparrot/apps - facebook/belebele - bluemoon-fandom-1-1-rp-cleaned - boolq - camel-ai/biology - camel-ai/chemistry - camel-ai/math - camel-ai/physics - jondurbin/contextual-dpo-v0.1 - jondurbin/gutenberg-dpo-v0.1 - jondurbin/py-dpo-v0.1 - jondurbin/truthy-dpo-v0.1 - LDJnr/Capybara - jondurbin/cinematika-v0.1 - WizardLM/WizardLM_evol_instruct_70k - glaiveai/glaive-function-calling-v2 - jondurbin/gutenberg-dpo-v0.1 - grimulkan/LimaRP-augmented - lmsys/lmsys-chat-1m - ParisNeo/lollms_aware_dataset - TIGER-Lab/MathInstruct - Muennighoff/natural-instructions - openbookqa - kingbri/PIPPA-shareGPT - piqa - Vezora/Tested-22k-Python-Alpaca - ropes - cakiki/rosetta-code - Open-Orca/SlimOrca - b-mc2/sql-create-context - squad_v2 - mattpscott/airoboros-summarization - migtissera/Synthia-v1.3 - unalignment/toxic-dpo-v0.2 - WhiteRabbitNeo/WRN-Chapter-1 - WhiteRabbitNeo/WRN-Chapter-2 - winogrande license: apache-2.0 model_name: bagel-dpo-7b-v0.5 pipeline_tag: text-generation tags: - finetuned - quantized - 4-bit - AWQ - transformers - pytorch - mistral - instruct - text-generation - conversational - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation quantized_by: Suparious model_creator: jondurbin inference: false prompt_template: '{bos}<|im_start|>{role} {text} <|im_end|>{eos} ' --- # jondurbin/bagel-dpo-7b-v0.5 AWQ - Model creator: [jondurbin](https://huggingface.co/jondurbin) - Original model: [bagel-dpo-7b-v0.4](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.4) ![bagel](bagel.png) ## Model Summary This is a fine-tune of mistral-7b-v0.2 using the bagel v0.5 dataset, including a DPO pass. See [bagel](https://github.com/jondurbin/bagel) for additional details on the datasets. The non-DPO version is available [here](https://huggingface.co/jondurbin/bagel-7b-v0.5) ## How to use ### Install the necessary packages ```bash pip install --upgrade autoawq autoawq-kernels ``` ### Example Python code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer, TextStreamer model_path = "solidrust/bagel-dpo-7b-v0.5-AWQ" system_message = "You are Bagel, incarnated a powerful AI with everything." # Load model model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) # Convert prompt to tokens prompt_template = """\ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant""" prompt = "You're standing on the surface of the Earth. "\ "You walk one mile south, one mile west and one mile north. "\ "You end up exactly where you started. Where are you?" tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt), return_tensors='pt').input_ids.cuda() # Generate output generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512) ``` ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. 
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead. AWQ is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, with support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code ## Prompt template: ChatML ```plaintext <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ```
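Since vLLM appears in the list of supported backends above, here is a minimal serving sketch for this AWQ quant. It assumes vLLM >= 0.2.2 is installed; the sampling settings and the ChatML-formatted prompt are illustrative choices, not an official example from the model creator.

```python
# Hedged sketch: loading the 4-bit AWQ weights with vLLM (assumes vLLM >= 0.2.2).
from vllm import LLM, SamplingParams

# Build a prompt following the ChatML template shown above.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain AWQ quantization in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# quantization="awq" tells vLLM to load the AWQ-quantized checkpoint.
llm = LLM(model="solidrust/bagel-dpo-7b-v0.5-AWQ", quantization="awq", dtype="half")
sampling = SamplingParams(temperature=0.7, max_tokens=256, stop=["<|im_end|>"])

outputs = llm.generate([prompt], sampling)
print(outputs[0].outputs[0].text)
```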
null
Non_BioNLP
# jondurbin/bagel-dpo-7b-v0.5 AWQ - Model creator: [jondurbin](https://huggingface.co/jondurbin) - Original model: [bagel-dpo-7b-v0.4](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.4) ![bagel](bagel.png) ## Model Summary This is a fine-tune of mistral-7b-v0.2 using the bagel v0.5 dataset, including a DPO pass. See [bagel](https://github.com/jondurbin/bagel) for additional details on the datasets. The non-DPO version is available [here](https://huggingface.co/jondurbin/bagel-7b-v0.5) ## How to use ### Install the necessary packages ```bash pip install --upgrade autoawq autoawq-kernels ``` ### Example Python code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer, TextStreamer model_path = "solidrust/bagel-dpo-7b-v0.5-AWQ" system_message = "You are Bagel, incarnated a powerful AI with everything." # Load model model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) # Convert prompt to tokens prompt_template = """\ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant""" prompt = "You're standing on the surface of the Earth. "\ "You walk one mile south, one mile west and one mile north. "\ "You end up exactly where you started. Where are you?" tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt), return_tensors='pt').input_ids.cuda() # Generate output generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512) ``` ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code ## Prompt template: ChatML ```plaintext <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ```
{"base_model": "jondurbin/bagel-dpo-7b-v0.4", "datasets": ["ai2_arc", "allenai/ultrafeedback_binarized_cleaned", "argilla/distilabel-intel-orca-dpo-pairs", "jondurbin/airoboros-3.2", "codeparrot/apps", "facebook/belebele", "bluemoon-fandom-1-1-rp-cleaned", "boolq", "camel-ai/biology", "camel-ai/chemistry", "camel-ai/math", "camel-ai/physics", "jondurbin/contextual-dpo-v0.1", "jondurbin/gutenberg-dpo-v0.1", "jondurbin/py-dpo-v0.1", "jondurbin/truthy-dpo-v0.1", "LDJnr/Capybara", "jondurbin/cinematika-v0.1", "WizardLM/WizardLM_evol_instruct_70k", "glaiveai/glaive-function-calling-v2", "jondurbin/gutenberg-dpo-v0.1", "grimulkan/LimaRP-augmented", "lmsys/lmsys-chat-1m", "ParisNeo/lollms_aware_dataset", "TIGER-Lab/MathInstruct", "Muennighoff/natural-instructions", "openbookqa", "kingbri/PIPPA-shareGPT", "piqa", "Vezora/Tested-22k-Python-Alpaca", "ropes", "cakiki/rosetta-code", "Open-Orca/SlimOrca", "b-mc2/sql-create-context", "squad_v2", "mattpscott/airoboros-summarization", "migtissera/Synthia-v1.3", "unalignment/toxic-dpo-v0.2", "WhiteRabbitNeo/WRN-Chapter-1", "WhiteRabbitNeo/WRN-Chapter-2", "winogrande"], "license": "apache-2.0", "model_name": "bagel-dpo-7b-v0.5", "pipeline_tag": "text-generation", "tags": ["finetuned", "quantized", "4-bit", "AWQ", "transformers", "pytorch", "mistral", "instruct", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation"], "quantized_by": "Suparious", "model_creator": "jondurbin", "inference": false, "prompt_template": "{bos}<|im_start|>{role}\n{text}\n<|im_end|>{eos} "}
task
[ "SUMMARIZATION" ]
46,055
deepset/tinybert-6l-768d-squad2
deepset
question-answering
[ "transformers", "pytorch", "safetensors", "bert", "question-answering", "exbert", "en", "dataset:squad_v2", "arxiv:1909.10351", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2024-09-26T08:09:06+00:00
125
1
--- datasets: - squad_v2 language: en license: mit tags: - exbert thumbnail: https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg model-index: - name: deepset/tinybert-6l-768d-squad2 results: - task: type: question-answering name: Question Answering dataset: name: squad_v2 type: squad_v2 config: squad_v2 split: validation metrics: - type: exact_match value: 73.8248 name: Exact Match verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGFmZmFiN2E5ODZkOTkyMjQ1NTUzMmQwMjc0M2RlYzVlNmM4YTFlNzA4YzIwY2JkY2EyNDg2ZTY3OTdjZTVlZiIsInZlcnNpb24iOjF9.ZZ6c2OI3lzeNhuSWTh28j00zk-sPrqkTvdVBZv2wJc1D4YnR-xOj72haybT6MV_xeYqTg3-x9L8PsWSS20NaDw - type: f1 value: 77.1684 name: F1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzAxMDk1YzI5ZjA2N2ZmMzAxNjgxYzJiNzAzYmI1ZWU5ZDRmYWY3OWJmMjlmNDcyMGE0YWY5NjNhZTk4YWY5ZSIsInZlcnNpb24iOjF9.rF3raNGUSYv5D2xzWLZztD99vwDKvWb22LG32RomrDGP6XKTbCVqZzAw5UFw93jKb0VoLApbQQ-AOGxLj3U_Cg --- # tinybert for Extractive QA ## Overview **Language model:** deepset/tinybert-6L-768D-squad2 **Language:** English **Training data:** SQuAD 2.0 training set x 20 augmented + SQuAD 2.0 training set without augmentation **Eval data:** SQuAD 2.0 dev set **Code:** See [an example extractive QA pipeline built with Haystack](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline) **Infrastructure**: 1x V100 GPU **Published**: Dec 8th, 2021 ## Details - Haystack's intermediate layer and prediction layer distillation features were used for training (based on [TinyBERT](https://arxiv.org/pdf/1909.10351.pdf)). deepset/bert-base-uncased-squad2 was used as the teacher model and huawei-noah/TinyBERT_General_6L_768D was used as the student model. ## Hyperparameters ### Intermediate layer distillation ``` batch_size = 26 n_epochs = 5 max_seq_len = 384 learning_rate = 5e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 temperature = 1 ``` ### Prediction layer distillation ``` batch_size = 26 n_epochs = 5 max_seq_len = 384 learning_rate = 3e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 temperature = 1 distillation_loss_weight = 1.0 ``` ## Usage ### In Haystack Haystack is an AI orchestration framework to build customizable, production-ready LLM applications. You can use this model in Haystack to do extractive question answering on documents. To load and run the model with [Haystack](https://github.com/deepset-ai/haystack/): ```python # After running pip install haystack-ai "transformers[torch,sentencepiece]" from haystack import Document from haystack.components.readers import ExtractiveReader docs = [ Document(content="Python is a popular programming language"), Document(content="python ist eine beliebte Programmiersprache"), ] reader = ExtractiveReader(model="deepset/tinybert-6l-768d-squad2") reader.warm_up() question = "What is a popular programming language?" result = reader.run(query=question, documents=docs) # {'answers': [ExtractedAnswer(query='What is a popular programming language?', score=0.5740374326705933, data='python', document=Document(id=..., content: '...'), context=None, document_offset=ExtractedAnswer.Span(start=0, end=6),...)]} ``` For a complete example with an extractive question answering pipeline that scales over many documents, check out the [corresponding Haystack tutorial](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline). 
### In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "deepset/tinybert-6l-768d-squad2" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.' } res = nlp(QA_input) # b) Load model & tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## Performance ``` "exact": 71.87736882001179 "f1": 76.36111895973675 ``` ## Authors - Timo Möller: `timo.moeller [at] deepset.ai` - Julian Risch: `julian.risch [at] deepset.ai` - Malte Pietsch: `malte.pietsch [at] deepset.ai` - Michel Bartels: `michel.bartels [at] deepset.ai` ## About us <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3"> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/> </div> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/> </div> </div> [deepset](http://deepset.ai/) is the company behind the production-ready open-source AI framework [Haystack](https://haystack.deepset.ai/). Some of our other work: - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2) - [German BERT](https://deepset.ai/german-bert), [GermanQuAD and GermanDPR](https://deepset.ai/germanquad), [German embedding model](https://huggingface.co/mixedbread-ai/deepset-mxbai-embed-de-large-v1) - [deepset Cloud](https://www.deepset.ai/deepset-cloud-product), [deepset Studio](https://www.deepset.ai/deepset-studio) ## Get in touch and join the Haystack community <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>. We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p> [Twitter](https://twitter.com/Haystack_AI) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://haystack.deepset.ai/) | [YouTube](https://www.youtube.com/@deepset_ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
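The exact/F1 numbers in the Performance section above come from the SQuAD 2.0 dev set. As a rough illustration of how such metrics can be computed (this is not the authors' exact evaluation harness), the snippet below scores a small slice of the dev set with the `evaluate` library's `squad_v2` metric.

```python
# Hedged sketch: scoring a few SQuAD 2.0 dev examples; not the official evaluation setup.
from datasets import load_dataset
from transformers import pipeline
import evaluate

model_name = "deepset/tinybert-6l-768d-squad2"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)

dev = load_dataset("squad_v2", split="validation[:20]")  # small slice for a quick sanity check
metric = evaluate.load("squad_v2")

predictions, references = [], []
for ex in dev:
    out = qa(question=ex["question"], context=ex["context"], handle_impossible_answer=True)
    predictions.append({
        "id": ex["id"],
        "prediction_text": out["answer"],
        "no_answer_probability": 0.0,  # simplification; a real harness would estimate this
    })
    references.append({"id": ex["id"], "answers": ex["answers"]})

print(metric.compute(predictions=predictions, references=references))
```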
null
Non_BioNLP
# tinybert for Extractive QA ## Overview **Language model:** deepset/tinybert-6L-768D-squad2 **Language:** English **Training data:** SQuAD 2.0 training set x 20 augmented + SQuAD 2.0 training set without augmentation **Eval data:** SQuAD 2.0 dev set **Code:** See [an example extractive QA pipeline built with Haystack](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline) **Infrastructure**: 1x V100 GPU **Published**: Dec 8th, 2021 ## Details - Haystack's intermediate layer and prediction layer distillation features were used for training (based on [TinyBERT](https://arxiv.org/pdf/1909.10351.pdf)). deepset/bert-base-uncased-squad2 was used as the teacher model and huawei-noah/TinyBERT_General_6L_768D was used as the student model. ## Hyperparameters ### Intermediate layer distillation ``` batch_size = 26 n_epochs = 5 max_seq_len = 384 learning_rate = 5e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 temperature = 1 ``` ### Prediction layer distillation ``` batch_size = 26 n_epochs = 5 max_seq_len = 384 learning_rate = 3e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 temperature = 1 distillation_loss_weight = 1.0 ``` ## Usage ### In Haystack Haystack is an AI orchestration framework to build customizable, production-ready LLM applications. You can use this model in Haystack to do extractive question answering on documents. To load and run the model with [Haystack](https://github.com/deepset-ai/haystack/): ```python # After running pip install haystack-ai "transformers[torch,sentencepiece]" from haystack import Document from haystack.components.readers import ExtractiveReader docs = [ Document(content="Python is a popular programming language"), Document(content="python ist eine beliebte Programmiersprache"), ] reader = ExtractiveReader(model="deepset/tinybert-6l-768d-squad2") reader.warm_up() question = "What is a popular programming language?" result = reader.run(query=question, documents=docs) # {'answers': [ExtractedAnswer(query='What is a popular programming language?', score=0.5740374326705933, data='python', document=Document(id=..., content: '...'), context=None, document_offset=ExtractedAnswer.Span(start=0, end=6),...)]} ``` For a complete example with an extractive question answering pipeline that scales over many documents, check out the [corresponding Haystack tutorial](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline). ### In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "deepset/tinybert-6l-768d-squad2" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.' 
} res = nlp(QA_input) # b) Load model & tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## Performance ``` "exact": 71.87736882001179 "f1": 76.36111895973675 ``` ## Authors - Timo Möller: `timo.moeller [at] deepset.ai` - Julian Risch: `julian.risch [at] deepset.ai` - Malte Pietsch: `malte.pietsch [at] deepset.ai` - Michel Bartels: `michel.bartels [at] deepset.ai` ## About us <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3"> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/> </div> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/> </div> </div> [deepset](http://deepset.ai/) is the company behind the production-ready open-source AI framework [Haystack](https://haystack.deepset.ai/). Some of our other work: - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2) - [German BERT](https://deepset.ai/german-bert), [GermanQuAD and GermanDPR](https://deepset.ai/germanquad), [German embedding model](https://huggingface.co/mixedbread-ai/deepset-mxbai-embed-de-large-v1) - [deepset Cloud](https://www.deepset.ai/deepset-cloud-product), [deepset Studio](https://www.deepset.ai/deepset-studio) ## Get in touch and join the Haystack community <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>. We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p> [Twitter](https://twitter.com/Haystack_AI) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://haystack.deepset.ai/) | [YouTube](https://www.youtube.com/@deepset_ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
{"datasets": ["squad_v2"], "language": "en", "license": "mit", "tags": ["exbert"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg", "model-index": [{"name": "deepset/tinybert-6l-768d-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 73.8248, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGFmZmFiN2E5ODZkOTkyMjQ1NTUzMmQwMjc0M2RlYzVlNmM4YTFlNzA4YzIwY2JkY2EyNDg2ZTY3OTdjZTVlZiIsInZlcnNpb24iOjF9.ZZ6c2OI3lzeNhuSWTh28j00zk-sPrqkTvdVBZv2wJc1D4YnR-xOj72haybT6MV_xeYqTg3-x9L8PsWSS20NaDw"}, {"type": "f1", "value": 77.1684, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzAxMDk1YzI5ZjA2N2ZmMzAxNjgxYzJiNzAzYmI1ZWU5ZDRmYWY3OWJmMjlmNDcyMGE0YWY5NjNhZTk4YWY5ZSIsInZlcnNpb24iOjF9.rF3raNGUSYv5D2xzWLZztD99vwDKvWb22LG32RomrDGP6XKTbCVqZzAw5UFw93jKb0VoLApbQQ-AOGxLj3U_Cg"}]}]}]}
task
[ "QUESTION_ANSWERING" ]
46,056
mjtechguy/phi-4-multimodal-instruct
mjtechguy
automatic-speech-recognition
[ "transformers", "safetensors", "phi4mm", "text-generation", "nlp", "code", "audio", "automatic-speech-recognition", "speech-summarization", "speech-translation", "visual-question-answering", "phi-4-multimodal", "phi", "phi-4-mini", "custom_code", "multilingual", "ar", "zh", "cs", "da", "nl", "en", "fi", "fr", "de", "he", "hu", "it", "ja", "ko", "no", "pl", "pt", "ru", "es", "sv", "th", "tr", "uk", "arxiv:2407.13833", "license:mit", "autotrain_compatible", "region:us" ]
2025-02-28T01:04:42Z
2025-02-28T01:04:43+00:00
14
0
--- language: - multilingual - ar - zh - cs - da - nl - en - fi - fr - de - he - hu - it - ja - ko - "no" - pl - pt - ru - es - sv - th - tr - uk library_name: transformers license: mit license_link: https://huggingface.co/microsoft/Phi-4-multimodal-instruct/resolve/main/LICENSE tags: - nlp - code - audio - automatic-speech-recognition - speech-summarization - speech-translation - visual-question-answering - phi-4-multimodal - phi - phi-4-mini widget: - example_title: Librispeech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: Librispeech sample 2 src: https://cdn-media.huggingface.co/speech_samples/sample2.flac - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- ## Model Summary Phi-4-multimodal-instruct is a lightweight open multimodal foundation model that leverages the language, vision, and speech research and datasets used for Phi-3.5 and 4.0 models. The model processes text, image, and audio inputs, generating text outputs, and comes with 128K token context length. The model underwent an enhancement process, incorporating supervised fine-tuning, direct preference optimization, and RLHF (Reinforcement Learning from Human Feedback) to support precise instruction adherence and safety measures. The languages that each modality supports are the following: - Text: Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, Ukrainian - Vision: English - Audio: English, Chinese, German, French, Italian, Japanese, Spanish, Portuguese 📰 [Phi-4-multimodal Microsoft Blog](https://aka.ms/phi4-feb2025) <br> 📖 [Phi-4-multimodal Technical Report](https://aka.ms/phi-4-multimodal/techreport) <br> 🏡 [Phi Portal](https://aka.ms/phi-4-multimodal/azure) <br> 👩‍🍳 [Phi Cookbook](https://github.com/microsoft/PhiCookBook) <br> 🖥️ Try It on [Azure](https://aka.ms/phi-4-multimodal/azure), [Nvidia Playground](https://aka.ms/phi-4-multimodal/nvidia) <br> 📱 Huggingface Spaces [Thoughts Organizer](https://huggingface.co/spaces/microsoft/ThoughtsOrganizer), [Stories Come Alive](https://huggingface.co/spaces/microsoft/StoriesComeAlive), [Phine Speech Translator](https://huggingface.co/spaces/microsoft/PhineSpeechTranslator) <br> **Phi-4**: [[multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct) | [onnx](https://huggingface.co/microsoft/Phi-4-multimodal-instruct-onnx)]; [mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct); Watch as Phi-4 Multimodal analyzes spoken language to help plan a trip to Seattle, demonstrating its advanced audio processing and recommendation capabilities. <div style="width: 800px; height: 400px; margin: 0 auto;"> <video autoplay muted loop controls playsinline style="width: 100%; height: 100%; object-fit: contain;"> <source src="https://phi4releasestorage.blob.core.windows.net/demo/Phi-4-multimodal_SeattleTrip.mp4" type="video/mp4"> Your browser does not support the video tag. </video> </div> See how Phi-4 Multimodal tackles complex mathematical problems through visual inputs, demonstrating its ability to process and solve equations presented in images.
<div style="width: 800px; height: 400px; margin: 0 auto;"> <video autoplay muted loop controls playsinline style="width: 100%; height: 100%; object-fit: contain;"> <source src="https://phi4releasestorage.blob.core.windows.net/demo/Phi-4-multimodal_Math.mp4" type="video/mp4"> Your browser does not support the video tag. </video> </div> Explore how Phi-4 Mini functions as an intelligent agent, showcasing its reasoning and task execution abilities in complex scenarios. <div style="width: 800px; height: 400px; margin: 0 auto;"> <video autoplay muted loop controls playsinline style="width: 100%; height: 100%; object-fit: contain;"> <source src="https://phi4releasestorage.blob.core.windows.net/demo/Phi-4-mini_Agents.mp4" type="video/mp4"> Your browser does not support the video tag. </video> </div> ## Intended Uses ### Primary Use Cases The model is intended for broad multilingual and multimodal commercial and research use . The model provides uses for general purpose AI systems and applications which require 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially math and logic) 4) Function and tool calling 5) General image understanding 6) Optical character recognition 7) Chart and table understanding 8) Multiple image comparison 9) Multi-image or video clip summarization 10) Speech recognition 11) Speech translation 12) Speech QA 13) Speech summarization 14) Audio understanding The model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. ### Use Case Considerations The model is not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models and multimodal models, as well as performance difference across languages, as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including but not limited to privacy, trade compliance laws, etc.) that are relevant to their use case. ***Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.*** ## Release Notes This release of Phi-4-multimodal-instruct is based on valuable user feedback from the Phi-3 series. Previously, users could use a speech recognition model to talk to the Mini and Vision models. To achieve this, users needed to use a pipeline of two models: one model to transcribe the audio to text, and another model for the language or vision tasks. This pipeline means that the core model was not provided the full breadth of input information – e.g. cannot directly observe multiple speakers, background noises, jointly align speech, vision, language information at the same time on the same representation space. With Phi-4-multimodal-instruct, a single new open model has been trained across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. The model employed new architecture, larger vocabulary for efficiency, multilingual, and multimodal support, and better post-training techniques were used for instruction following and function calling, as well as additional data leading to substantial gains on key multimodal capabilities. 
It is anticipated that Phi-4-multimodal-instruct will greatly benefit app developers and various use cases. The enthusiastic support for the Phi-4 series is greatly appreciated. Feedback on Phi-4 is welcomed and crucial to the model's evolution and improvement. Thank you for being part of this journey! ## Model Quality To understand the capabilities, Phi-4-multimodal-instruct was compared with a set of models over a variety of benchmarks using an internal benchmark platform (See Appendix A for benchmark methodology). Users can refer to the Phi-4-Mini-Instruct model card for details of language benchmarks. At the high-level overview of the model quality on representative speech and vision benchmarks: ### Speech The Phi-4-multimodal-instruct was observed as: - Having strong automatic speech recognition (ASR) and speech translation (ST) performance, surpassing expert ASR model WhisperV3 and ST models SeamlessM4T-v2-Large. - Ranking number 1 on the Huggingface OpenASR leaderboard with word error rate 6.14% in comparison with the current best model at 6.5% as of Jan 17, 2025. - Being the first open-sourced model that can perform speech summarization, and the performance is close to GPT-4o. - Having a gap with close models, e.g. Gemini-1.5-Flash and GPT-4o-realtime-preview, on the speech QA task. Work is being undertaken to improve this capability in the next iterations. #### Speech Recognition (lower is better) The performance of Phi-4-multimodal-instruct on the aggregated benchmark datasets: ![alt text](./figures/speech_recognition.png) The performance of Phi-4-multimodal-instruct on different languages, averaging the WERs of CommonVoice and FLEURS: ![alt text](./figures/speech_recog_by_lang.png) #### Speech Translation (higher is better) Translating from German, Spanish, French, Italian, Japanese, Portuguese, Chinese to English: ![alt text](./figures/speech_translate.png) Translating from English to German, Spanish, French, Italian, Japanese, Portuguese, Chinese. Note that WhisperV3 does not support this capability: ![alt text](./figures/speech_translate_2.png) #### Speech Summarization (higher is better) ![alt text](./figures/speech_summarization.png) #### Speech QA MT bench scores are scaled by 10x to match the score range of MMMLU: ![alt text](./figures/speech_qa.png) #### Audio Understanding AIR bench scores are scaled by 10x to match the score range of MMAU: ![alt text](./figures/audio_understand.png) ### Vision #### Vision-Speech tasks Phi-4-multimodal-instruct is capable of processing both image and audio together; the following table shows the model quality when the input query for vision content is synthetic speech on chart/table understanding and document reasoning tasks. Compared to other existing state-of-the-art omni models that can take audio and visual signals as input, Phi-4-multimodal-instruct achieves much stronger performance on multiple benchmarks.
| Benchmarks | Phi-4-multimodal-instruct | InternOmni-7B | Gemini-2.0-Flash-Lite-prv-02-05 | Gemini-2.0-Flash | Gemini-1.5-Pro | |-----------------------|--------------------------|---------------|--------------------------------|-----------------|----------------| | s_AI2D | **68.9** | 53.9 | 62.0 | **69.4** | 67.7 | | s_ChartQA | **69.0** | 56.1 | 35.5 | 51.3 | 46.9 | | s_DocVQA | **87.3** | 79.9 | 76.0 | 80.3 | 78.2 | | s_InfoVQA | **63.7** | 60.3 | 59.4 | 63.6 | **66.1** | | **Average** | **72.2** | **62.6** | **58.2** | **66.2** | **64.7** | ### Vision tasks To understand the vision capabilities, Phi-4-multimodal-instruct was compared with a set of models over a variety of zero-shot benchmarks using an internal benchmark platform. At the high-level overview of the model quality on representative benchmarks: | Dataset | Phi-4-multimodal-ins | Phi-3.5-vision-ins | Qwen 2.5-VL-3B-ins | Intern VL 2.5-4B | Qwen 2.5-VL-7B-ins | Intern VL 2.5-8B | Gemini 2.0-Flash Lite-preview-0205 | Gemini2.0-Flash | Claude-3.5-Sonnet-2024-10-22 | Gpt-4o-2024-11-20 | |----------------------------------|---------------------|-------------------|-------------------|-----------------|-------------------|-----------------|--------------------------------|-----------------|----------------------------|------------------| | **Popular aggregated benchmark** | | | | | | | | | | | | MMMU | **55.1** | 43.0 | 47.0 | 48.3 | 51.8 | 50.6 | 54.1 | **64.7** | 55.8 | 61.7 | | MMBench (dev-en) | **86.7** | 81.9 | 84.3 | 86.8 | 87.8 | 88.2 | 85.0 | **90.0** | 86.7 | 89.0 | | MMMU-Pro (std/vision) | **38.5** | 21.8 | 29.9 | 32.4 | 36.9 | 34.4 | 45.1 | **54.4** | 54.3 | 53.0 | | **Visual science reasoning** | | | | | | | | | | | | ScienceQA Visual (img-test) | **97.5** | 91.3 | 79.4 | 96.2 | 87.7 | **97.3** | 85.0 | 88.3 | 81.2 | 88.2 | | **Visual math reasoning** | | | | | | | | | | | | MathVista (testmini) | **62.4** | 43.9 | 60.8 | 51.2 | **67.8** | 56.7 | 57.6 | 47.2 | 56.9 | 56.1 | | InterGPS | **48.6** | 36.3 | 48.3 | 53.7 | 52.7 | 54.1 | 57.9 | **65.4** | 47.1 | 49.1 | | **Chart & table reasoning** | | | | | | | | | | | | AI2D | **82.3** | 78.1 | 78.4 | 80.0 | 82.6 | 83.0 | 77.6 | 82.1 | 70.6 | **83.8** | | ChartQA | **81.4** | 81.8 | 80.0 | 79.1 | **85.0** | 81.0 | 73.0 | 79.0 | 78.4 | 75.1 | | DocVQA | **93.2** | 69.3 | 93.9 | 91.6 | **95.7** | 93.0 | 91.2 | 92.1 | 95.2 | 90.9 | | InfoVQA | **72.7** | 36.6 | 77.1 | 72.1 | **82.6** | 77.6 | 73.0 | 77.8 | 74.3 | 71.9 | | **Document Intelligence** | | | | | | | | | | | | TextVQA (val) | **75.6** | 72.0 | 76.8 | 70.9 | **77.7** | 74.8 | 72.9 | 74.4 | 58.6 | 73.1 | | OCR Bench | **84.4** | 63.8 | 82.2 | 71.6 | **87.7** | 74.8 | 75.7 | 81.0 | 77.0 | 77.7 | | **Object visual presence verification** | | | | | | | | | | | | POPE | **85.6** | 86.1 | 87.9 | 89.4 | 87.5 | **89.1** | 87.5 | 88.0 | 82.6 | 86.5 | | **Multi-image perception** | | | | | | | | | | | | BLINK | **61.3** | 57.0 | 48.1 | 51.2 | 55.3 | 52.5 | 59.3 | **64.0** | 56.9 | 62.4 | | Video MME 16 frames | **55.0** | 50.8 | 56.5 | 57.3 | 58.2 | 58.7 | 58.8 | 65.5 | 60.2 | **68.2** | | **Average** | **72.0** | **60.9** | **68.7** | **68.8** | **73.1** | **71.1** | **70.2** | **74.3** | **69.1** | **72.4** | ![alt text](./figures/vision_radar.png) #### Visual Perception Below are the comparison results on existing multi-image tasks. On average, Phi-4-multimodal-instruct outperforms competitor models of the same size and competitive with much bigger models on multi-frame capabilities. 
BLINK is an aggregated benchmark with 14 visual tasks that humans can solve very quickly but are still hard for current multimodal LLMs. | Dataset | Phi-4-multimodal-instruct | Qwen2.5-VL-3B-Instruct | InternVL 2.5-4B | Qwen2.5-VL-7B-Instruct | InternVL 2.5-8B | Gemini-2.0-Flash-Lite-prv-02-05 | Gemini-2.0-Flash | Claude-3.5-Sonnet-2024-10-22 | Gpt-4o-2024-11-20 | |----------------------------|--------------------------|----------------------|-----------------|----------------------|-----------------|--------------------------------|-----------------|----------------------------|------------------| | Art Style | **86.3** | 58.1 | 59.8 | 65.0 | 65.0 | 76.9 | 76.9 | 68.4 | 73.5 | | Counting | **60.0** | 67.5 | 60.0 | 66.7 | **71.7** | 45.8 | 69.2 | 60.8 | 65.0 | | Forensic Detection | **90.2** | 34.8 | 22.0 | 43.9 | 37.9 | 31.8 | 74.2 | 63.6 | 71.2 | | Functional Correspondence | **30.0** | 20.0 | 26.9 | 22.3 | 27.7 | 48.5 | **53.1** | 34.6 | 42.3 | | IQ Test | **22.7** | 25.3 | 28.7 | 28.7 | 28.7 | 28.0 | **30.7** | 20.7 | 25.3 | | Jigsaw | **68.7** | 52.0 | **71.3** | 69.3 | 53.3 | 62.7 | 69.3 | 61.3 | 68.7 | | Multi-View Reasoning | **76.7** | 44.4 | 44.4 | 54.1 | 45.1 | 55.6 | 41.4 | 54.9 | 54.1 | | Object Localization | **52.5** | 55.7 | 53.3 | 55.7 | 58.2 | 63.9 | **67.2** | 58.2 | 65.6 | | Relative Depth | **69.4** | 68.5 | 68.5 | 80.6 | 76.6 | **81.5** | 72.6 | 66.1 | 73.4 | | Relative Reflectance | **26.9** | **38.8** | **38.8** | 32.8 | **38.8** | 33.6 | 34.3 | 38.1 | 38.1 | | Semantic Correspondence | **52.5** | 32.4 | 33.8 | 28.8 | 24.5 | **56.1** | 55.4 | 43.9 | 47.5 | | Spatial Relation | **72.7** | 80.4 | 86.0 | **88.8** | 86.7 | 74.1 | 79.0 | 74.8 | 83.2 | | Visual Correspondence | **67.4** | 28.5 | 39.5 | 50.0 | 44.2 | 84.9 | **91.3** | 72.7 | 82.6 | | Visual Similarity | **86.7** | 67.4 | 88.1 | 87.4 | 85.2 | **87.4** | 80.7 | 79.3 | 83.0 | | **Overall** | **61.6** | **48.1** | **51.2** | **55.3** | **52.5** | **59.3** | **64.0** | **56.9** | **62.4** | ![alt text](./figures/multi_image.png) ## Usage ### Requirements Phi-4 family has been integrated in the `4.48.2` version of `transformers`. The current `transformers` version can be verified with: `pip list | grep transformers`. Examples of required packages: ``` flash_attn==2.7.4.post1 torch==2.6.0 transformers==4.48.2 accelerate==1.3.0 soundfile==0.13.1 pillow==11.1.0 scipy==1.15.2 torchvision==0.21.0 backoff==2.2.1 peft==0.13.2 ``` Phi-4-multimodal-instruct is also available in [Azure AI Studio](https://aka.ms/phi-4-multimodal/azure) ### Tokenizer Phi-4-multimodal-instruct supports a vocabulary size of up to `200064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-4-multimodal-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Input Formats Given the nature of the training data, the Phi-4-multimodal-instruct model is best suited for prompts using the chat format as follows: #### Text chat format This format is used for general conversation and instructions: ` <|system|>You are a helpful assistant.<|end|><|user|>How to explain Internet for a medieval knight?<|end|><|assistant|> ` #### Tool-enabled function-calling format This format is used when the user wants the model to provide function calls based on the given tools. The user should provide the available tools in the system prompt, wrapped by <|tool|> and <|/tool|> tokens. 
The tools should be specified in JSON format, using a JSON dump structure. Example: ` <|system|>You are a helpful assistant with some tools.<|tool|>[{"name": "get_weather_updates", "description": "Fetches weather updates for a given city using the RapidAPI Weather API.", "parameters": {"city": {"description": "The name of the city for which to retrieve weather information.", "type": "str", "default": "London"}}}]<|/tool|><|end|><|user|>What is the weather like in Paris today?<|end|><|assistant|> ` #### Vision-Language Format This format is used for conversation with image: ` <|user|><|image_1|>Describe the image in detail.<|end|><|assistant|> ` For multiple images, the user needs to insert multiple image placeholders in the prompt as below: ` <|user|><|image_1|><|image_2|><|image_3|>Summarize the content of the images.<|end|><|assistant|> ` #### Speech-Language Format This format is used for various speech and audio tasks: ` <|user|><|audio_1|>{task prompt}<|end|><|assistant|> ` The task prompt can vary for different tasks. Automatic Speech Recognition: ` <|user|><|audio_1|>Transcribe the audio clip into text.<|end|><|assistant|> ` Automatic Speech Translation: ` <|user|><|audio_1|>Translate the audio to {lang}.<|end|><|assistant|> ` Automatic Speech Translation with chain-of-thoughts: ` <|user|><|audio_1|>Transcribe the audio to text, and then translate the audio to {lang}. Use <sep> as a separator between the original transcript and the translation.<|end|><|assistant|> ` Spoken-query Question Answering: ` <|user|><|audio_1|><|end|><|assistant|> ` #### Vision-Speech Format This format is used for conversation with image and audio. The audio may contain a query related to the image: ` <|user|><|image_1|><|audio_1|><|end|><|assistant|> ` For multiple images, the user needs to insert multiple image placeholders in the prompt as below: ` <|user|><|image_1|><|image_2|><|image_3|><|audio_1|><|end|><|assistant|> ` **Vision** - Any common RGB/gray image format (e.g., (".jpg", ".jpeg", ".png", ".ppm", ".bmp", ".pgm", ".tif", ".tiff", ".webp")) can be supported. - Resolution depends on the GPU memory size. Higher resolution and more images will produce more tokens, thus using more GPU memory. During training, 64 crops can be supported. If it is a square image, the resolution would be around (8*448 by 8*448). For multiple images, at most 64 frames can be supported, but with more frames as input, the resolution of each frame needs to be reduced to fit in memory. **Audio** - Any audio format that can be loaded by the soundfile package should be supported. - To maintain satisfactory performance, the maximum audio length is suggested to be 40 seconds. For summarization tasks, the maximum audio length is suggested to be 30 minutes. ### Loading the model locally After obtaining the Phi-4-multimodal-instruct model checkpoints, users can use this sample code for inference.
```python import requests import torch import os import io from PIL import Image import soundfile as sf from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig from urllib.request import urlopen # Define model path model_path = "microsoft/Phi-4-multimodal-instruct" # Load model and processor processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="cuda", torch_dtype="auto", trust_remote_code=True, attn_implementation='flash_attention_2', ).cuda() # Load generation config generation_config = GenerationConfig.from_pretrained(model_path) # Define prompt structure user_prompt = '<|user|>' assistant_prompt = '<|assistant|>' prompt_suffix = '<|end|>' # Part 1: Image Processing print("\n--- IMAGE PROCESSING ---") image_url = 'https://www.ilankelman.org/stopsigns/australia.jpg' prompt = f'{user_prompt}<|image_1|>What is shown in this image?{prompt_suffix}{assistant_prompt}' print(f'>>> Prompt\n{prompt}') # Download and open image image = Image.open(requests.get(image_url, stream=True).raw) inputs = processor(text=prompt, images=image, return_tensors='pt').to('cuda:0') # Generate response generate_ids = model.generate( **inputs, max_new_tokens=1000, generation_config=generation_config, ) generate_ids = generate_ids[:, inputs['input_ids'].shape[1]:] response = processor.batch_decode( generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False )[0] print(f'>>> Response\n{response}') # Part 2: Audio Processing print("\n--- AUDIO PROCESSING ---") audio_url = "https://upload.wikimedia.org/wikipedia/commons/b/b0/Barbara_Sahakian_BBC_Radio4_The_Life_Scientific_29_May_2012_b01j5j24.flac" speech_prompt = "Transcribe the audio to text, and then translate the audio to French. Use <sep> as a separator between the original transcript and the translation." prompt = f'{user_prompt}<|audio_1|>{speech_prompt}{prompt_suffix}{assistant_prompt}' print(f'>>> Prompt\n{prompt}') # Downlowd and open audio file audio, samplerate = sf.read(io.BytesIO(urlopen(audio_url).read())) # Process with the model inputs = processor(text=prompt, audios=[(audio, samplerate)], return_tensors='pt').to('cuda:0') generate_ids = model.generate( **inputs, max_new_tokens=1000, generation_config=generation_config, ) generate_ids = generate_ids[:, inputs['input_ids'].shape[1]:] response = processor.batch_decode( generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False )[0] print(f'>>> Response\n{response}') ``` ## Responsible AI Considerations Like other language models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: The Phi models are trained primarily on English language content across text, speech, and visual inputs, with some additional multilingual coverage. Performance may vary significantly across different modalities and languages: + Text: Languages other than English will experience reduced performance, with varying levels of degradation across different non-English languages. English language varieties with less representation in the training data may perform worse than standard American English. + Speech: Speech recognition and processing shows similar language-based performance patterns, with optimal performance for standard American English accents and pronunciations. 
Other English accents, dialects, and non-English languages may experience lower recognition accuracy and response quality. Background noise, audio quality, and speaking speed can further impact performance. + Vision: Visual processing capabilities may be influenced by cultural and geographical biases in the training data. The model may show reduced performance when analyzing images containing text in non-English languages or visual elements more commonly found in non-Western contexts. Image quality, lighting conditions, and composition can also affect processing accuracy. + Multilingual performance and safety gaps: We believe it is important to make language models more widely available across different languages, but the Phi 4 models still exhibit challenges common across multilingual releases. As with any deployment of LLMs, developers will be better positioned to test for performance or safety gaps for their linguistic and cultural context and customize the model with additional fine-tuning and appropriate safeguards. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups, cultural contexts, or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: The majority of Phi 4 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, it is strongly recommended that users manually verify all API uses. + Long Conversation: Phi 4 models, like other models, can in some cases generate responses that are repetitive, unhelpful, or inconsistent in very long chat sessions in both English and non-English languages. Developers are encouraged to place appropriate mitigations, like limiting conversation turns to account for the possible conversational drift. + Inference of Sensitive Attributes: The Phi 4 models can sometimes attempt to infer sensitive attributes (such as personality characteristics, country of origin, gender, etc...) from the users’ voices when specifically asked to do so. Phi 4-multimodal-instruct is not designed or intended to be used as a biometric categorization system to categorize individuals based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. This behavior can be easily and efficiently mitigated at the application level by a system message. Developers should apply responsible AI best practices, including mapping, measuring, and mitigating risks associated with their specific use case and cultural, linguistic context. Phi 4 family of models are general purpose models. 
As developers plan to deploy these models for specific use cases, they are encouraged to fine-tune the models for their use case and leverage the models as part of broader AI systems with language-specific safeguards in place. Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model + **Architecture:** Phi-4-multimodal-instruct has 5.6B parameters and is a multimodal transformer model. The model has the pretrained Phi-4-Mini-Instruct as the backbone language model, and the advanced encoders and adapters of vision and speech.<br> + **Inputs:** Text, image, and audio. It is best suited for prompts using the chat format.<br> + **Context length:** 128K tokens<br> + **GPUs:** 512 A100-80G<br> + **Training time:** 28 days<br> + **Training data:** 5T tokens, 2.3M speech hours, and 1.1T image-text tokens<br> + **Outputs:** Generated text in response to the input<br> + **Dates:** Trained between December 2024 and January 2025<br> + **Status:** This is a static model trained on offline datasets with the cutoff date of June 2024 for publicly available data.<br> + **Supported languages:** + Text: Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, Ukrainian<br> + Vision: English<br> + Audio: English, Chinese, German, French, Italian, Japanese, Spanish, Portuguese<br> + **Release date:** February 2025<br> ### Training Datasets Phi-4-multimodal-instruct's training data includes a wide variety of sources, totaling 5 trillion text tokens, and is a combination of 1) publicly available documents filtered for quality, selected high-quality educational data, and code 2) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (e.g., science, daily activities, theory of mind, etc.) 
3) high quality human labeled data in chat format 4) selected high-quality image-text interleave data 5) synthetic and publicly available image, multi-image, and video data 6) anonymized in-house speech-text pair data with strong/weak transcriptions 7) selected high-quality publicly available and anonymized in-house speech data with task-specific supervisions 8) selected synthetic speech data 9) synthetic vision-speech data. Focus was placed on the quality of data that could potentially improve the reasoning ability for the model, and the publicly available documents were filtered to contain a preferred level of knowledge. As an example, the result of a game in premier league on a particular day might be good training data for large foundation models, but such information was removed for the Phi-4-multimodal-instruct to leave more model capacity for reasoning for the model's small size. The data collection process involved sourcing information from publicly available documents, with a focus on filtering out undesirable documents and images. To safeguard privacy, image and text data sources were filtered to remove or scrub potentially personal data from the training data. The decontamination process involved normalizing and tokenizing the dataset, then generating and comparing n-grams between the target dataset and benchmark datasets. Samples with matching n-grams above a threshold were flagged as contaminated and removed from the dataset. A detailed contamination report was generated, summarizing the matched text, matching ratio, and filtered results for further analysis. ### Fine-tuning A basic example of supervised fine-tuning (SFT) for [speech](https://huggingface.co/microsoft/Phi-4-multimodal-instruct/resolve/main/sample_finetune_speech.py) and [vision](https://huggingface.co/microsoft/Phi-4-multimodal-instruct/resolve/main/sample_finetune_vision.py) is provided respectively. ## Safety The Phi-4 family of models has adopted a robust safety post-training approach. This approach leverages a variety of both open-source and in-house generated datasets. The overall technique employed for safety alignment is a combination of SFT (Supervised Fine-Tuning), DPO (Direct Preference Optimization), and RLHF (Reinforcement Learning from Human Feedback) approaches by utilizing human-labeled and synthetic English-language datasets, including publicly available datasets focusing on helpfulness and harmlessness, as well as various questions and answers targeted to multiple safety categories. For non-English languages, existing datasets were extended via machine translation. Speech Safety datasets were generated by running Text Safety datasets through Azure TTS (Text-To-Speech) Service, for both English and non-English languages. Vision (text & images) Safety datasets were created to cover harm categories identified both in public and internal multi-modal RAI datasets. ### Safety Evaluation and Red-Teaming Various evaluation techniques including red teaming, adversarial conversation simulations, and multilingual safety evaluation benchmark datasets were leveraged to evaluate Phi-4 models' propensity to produce undesirable outputs across multiple languages and risk categories. Several approaches were used to compensate for the limitations of one approach alone. 
Findings across the various evaluation methods indicate that safety post-training that was done as detailed in the [Phi 3 Safety Post-Training paper](https://arxiv.org/abs/2407.13833) had a positive impact across multiple languages and risk categories as observed by refusal rates (refusal to output undesirable outputs) and robustness to jailbreak techniques. Details on prior red team evaluations across Phi models can be found in the [Phi 3 Safety Post-Training paper](https://arxiv.org/abs/2407.13833). For this release, the red teaming effort focused on the newest Audio input modality and on the following safety areas: harmful content, self-injury risks, and exploits. The model was found to be more susceptible to providing undesirable outputs when attacked with context manipulation or persuasive techniques. These findings applied to all languages, with the persuasive techniques mostly affecting French and Italian. This highlights the need for industry-wide investment in the development of high-quality safety evaluation datasets across multiple languages, including low resource languages, and risk areas that account for cultural nuances where those languages are spoken. ### Vision Safety Evaluation To assess model safety in scenarios involving both text and images, Microsoft's Azure AI Evaluation SDK was utilized. This tool facilitates the simulation of single-turn conversations with the target model by providing prompt text and images designed to incite harmful responses. The target model's responses are subsequently evaluated by a capable model across multiple harm categories, including violence, sexual content, self-harm, hateful and unfair content, with each response scored based on the severity of the harm identified. The evaluation results were compared with those of Phi-3.5-Vision and open-source models of comparable size. In addition, we ran both an internal and the public RTVLM and VLGuard multi-modal (text & vision) RAI benchmarks, once again comparing scores with Phi-3.5-Vision and open-source models of comparable size. However, the model may be susceptible to language-specific attack prompts and cultural context. ### Audio Safety Evaluation In addition to extensive red teaming, the Safety of the model was assessed through three distinct evaluations. First, as performed with Text and Vision inputs, Microsoft's Azure AI Evaluation SDK was leveraged to detect the presence of harmful content in the model's responses to Speech prompts. Second, [Microsoft's Speech Fairness evaluation](https://speech.microsoft.com/portal/responsibleai/assess) was run to verify that Speech-To-Text transcription worked well across a variety of demographics. Third, we proposed and evaluated a mitigation approach via a system message to help prevent the model from inferring sensitive attributes (such as gender, sexual orientation, profession, medical condition, etc...) from the voice of a user. ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) * [Accelerate](https://huggingface.co/docs/transformers/main/en/accelerate) * [soundfile](https://github.com/bastibe/python-soundfile) * [pillow](https://github.com/python-pillow/Pillow) ## Hardware Note that by default, the Phi-4-multimodal-instruct model uses flash attention, which requires certain types of GPU hardware to run. 
We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: * NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" ## License The model is licensed under the [MIT license](./LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies. ## Appendix A: Benchmark Methodology We include a brief word on methodology here - and in particular, how we think about optimizing prompts. In an ideal world, we would never change any prompts in our benchmarks to ensure it is always an apples-to-apples comparison when comparing different models. Indeed, this is our default approach, and is the case in the vast majority of models we have run to date. There are, however, some exceptions to this. In some cases, we see a model that performs worse than expected on a given eval due to a failure to respect the output format. For example: + A model may refuse to answer questions (for no apparent reason), or in coding tasks models may prefix their response with “Sure, I can help with that. …” which may break the parser. In such cases, we have opted to try different system messages (e.g. “You must always respond to a question” or “Get to the point!”). + Some models, we observed that few shots actually hurt model performance. In this case we did allow running the benchmarks with 0-shots for all cases. + We have tools to convert between chat and completions APIs. When converting a chat prompt to a completion prompt, some models have different keywords e.g. Human vs User. In these cases, we do allow for model-specific mappings for chat to completion prompts. However, we do not: + Pick different few-shot examples. Few shots will always be the same when comparing different models. + Change prompt format: e.g. if it is an A/B/C/D multiple choice, we do not tweak this to 1/2/3/4 multiple choice. ### Vision Benchmark Settings The goal of the benchmark setup is to measure the performance of the LMM when a regular user utilizes these models for a task involving visual input. To this end, we selected 9 popular and publicly available single-frame datasets and 3 multi-frame benchmarks that cover a wide range of challenging topics and tasks (e.g., mathematics, OCR tasks, charts-and-plots understanding, etc.) as well as a set of high-quality models. Our benchmarking setup utilizes zero-shot prompts and all the prompt content are the same for every model. We only formatted the prompt content to satisfy the model's prompt API. This ensures that our evaluation is fair across the set of models we tested. Many benchmarks necessitate models to choose their responses from a presented list of options. Therefore, we've included a directive in the prompt's conclusion, guiding all models to pick the option letter that corresponds to the answer they deem correct. In terms of the visual input, we use the images from the benchmarks as they come from the original datasets. 
We converted these images to base-64 using a JPEG encoding for models that require this format (e.g., GPTV, Claude Sonnet 3.5, Gemini 1.5 Pro/Flash). For other models (e.g., Llava Interleave, and InternVL2 4B and 8B), we used their Hugging Face interface and passed in PIL images or a JPEG image stored locally. We did not scale or pre-process images in any other way. Lastly, we used the same code to extract answers and evaluate them for every considered model. This ensures that we are fair in assessing the quality of their answers.

### Speech Benchmark Settings
The objective of this benchmarking setup is to assess the performance of models in speech and audio understanding tasks as utilized by regular users. To accomplish this, we selected several state-of-the-art open-sourced and closed-sourced models and performed evaluations across a variety of public and in-house benchmarks. These benchmarks encompass diverse and challenging topics, including Automatic Speech Recognition (ASR), Automatic Speech Translation (AST), Spoken Query Question Answering (SQQA), Audio Understanding (AU), and Speech Summarization. The results are derived from evaluations conducted on identical test data without any further clarifications. All results were obtained without sampling during inference. For an accurate comparison, we employed consistent prompts for models across different tasks, except for certain model APIs (e.g., GPT-4o), which may refuse to respond to specific prompts for some tasks. Finally, we used uniform code to extract answers and evaluate them for all considered models. This approach ensured fairness by assessing the quality of their responses.

### Benchmark datasets
The model was evaluated across a breadth of public and internal benchmarks to understand its capabilities under multiple tasks and conditions. While most evaluations use English, multilingual benchmarks were incorporated to cover performance in select languages. More specifically:
+ Vision:
  + Popular aggregated benchmark:
    + MMMU and MMMU-Pro: massive multi-discipline tasks at college-level subject knowledge and deliberate reasoning.
    + MMBench: large-scale benchmark to evaluate perception and reasoning capabilities.
  + Visual reasoning:
    + ScienceQA: multimodal visual question answering on science.
    + MathVista: visual math reasoning.
    + InterGPS: visual 2D geometry reasoning.
  + Chart reasoning:
    + ChartQA: visual and logical reasoning on charts.
    + AI2D: diagram understanding.
  + Document Intelligence:
    + TextVQA: read and reason about text in images to answer questions about them.
    + InfoVQA: read and reason about high-resolution infographics images with arbitrary aspect ratios.
    + DocVQA: read and reason about document images with dense texts and handwritten texts.
    + OCRBench: test OCR and QA capability on diverse text-related images.
  + Vision speech multimodal understanding:
    + s_AI2D: diagram understanding with speech as the question format.
    + s_ChartQA: visual and logical reasoning on charts with speech as the question format.
    + s_InfoVQA: read and reason about high-resolution infographics images with speech as the question format.
    + s_DocVQA: read and reason about document images with dense texts and handwritten texts with speech as the question format.
  + RAI & Security Benchmarks:
    + VLGuardExt: VLGuard is a public vision-language instruction-following dataset for model safety, addressing deception, discrimination, privacy, and risky behavior (advice, sexual, violence, political).
This was extended to a few internal categories such as child safety and election-critical information.
    + RTVLM: public benchmark for red-teaming vision-language models on model truthfulness, privacy, safety, and fairness.
    + GPTV-RAI: in-house benchmark for GPT-4V released from Azure AI, measuring harmfulness (e.g. sexual, violent, hate and self-harm), privacy, jailbreak, and misinformation.
+ Speech:
  + CommonVoice v15 is an open-source, multilingual speech dataset developed by Mozilla. It includes over 33,000 hours of speech data in 133 languages, contributed and validated by volunteers worldwide. The evaluations were conducted in the eight supported languages.
  + The OpenASR Leaderboard on Hugging Face is designed for benchmarking and evaluating the robustness of ASR models on English. The datasets in the leaderboard cover diverse speech domains including reading speech, conversations, meetings, and so on.
  + CoVoST2 is a multilingual speech-to-text translation dataset derived from Mozilla's Common Voice project. It is one of the largest open datasets available for speech translation, providing support for both X-to-English (X→En) and English-to-X (En→X) translation tasks. The directions with supported languages were evaluated on the test sets.
  + FLEURS is a multilingual speech dataset designed for evaluating speech recognition and speech-to-text translation models across a wide range of languages. The test sets for speech recognition and translation tasks were evaluated with the eight supported languages.
  + MT Bench (Multi-turn Benchmark) is specifically designed to evaluate the conversational and instruction-following abilities of AI models in multi-turn question-answering (QA) scenarios. To support spoken questions, the text is synthesized into speech.
  + MMMLU (Multilingual Massive Multitask Language Understanding) is an extensive benchmark designed to evaluate the general knowledge and reasoning capabilities of AI models across a wide array of subjects. To support spoken questions, the text is synthesized into its speech counterpart. The model was evaluated on the eight supported languages for this test set.
  + AIR-Bench Chat (Audio Instruction and Response Benchmark) is a comprehensive evaluation framework designed to test the capabilities of large audio language models (LALMs). It includes both foundation and chat benchmarks. The chat benchmark was selected for its open-ended audio question-answering capability.
  + MMAU (Massive Multi-Task Audio Understanding) is a comprehensive dataset designed to evaluate the capabilities of multi-modal models in audio-based understanding and reasoning tasks. The test sets are in the form of multiple-choice QA, covering the categories of music, sound, and speech.
  + Golden3 is a real-world meeting dataset, containing 108 meeting recordings with corresponding transcripts, averaging 6 minutes each. It is recorded across 30 conference rooms, featuring 4-8 attendees. The dataset is primarily in English, covering a wide range of topics. GPT4 is employed to generate summarization instructions that ask to summarize part of or the entire conversation, or to control the output style/length/structure.
  + AMI (Augmented Multi-Party Interaction) is a comprehensive collection of meeting recordings, encompassing approximately 100 hours of data. The test split contains 20 meeting recordings with an average duration of 32 minutes. The model was tested on the close-talking version of the audio.
GPT4 is employed to generate summarization instructions that ask to summarize part of or the entire conversation, or to control the output style/length/structure.
+ Safety and RAI:
  + Single-turn trustworthiness evaluation:
    + DecodingTrust: DecodingTrust is a collection of trustworthiness benchmarks across eight different perspectives.
    + XSTest: XSTest is an evaluation of exaggerated safety (over-refusal) behavior.
    + Toxigen: Toxigen is a benchmark for adversarial and hate speech detection.
  + Red Team:
    + Responses to prompts provided by the AI Red Team at Microsoft.
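As a brief footnote on the speech metrics referenced above: the ASR results (CommonVoice, FLEURS, OpenASR leaderboard) are reported as word error rate (WER). A minimal, illustrative scoring sketch is shown below; the `jiwer` package is an assumed choice for the computation, not a dependency specified by this card.

```python
# Illustrative WER computation for ASR outputs.
# jiwer is an assumed, commonly used scoring package, not one listed by this card.
import jiwer

references = ["the quick brown fox jumps over the lazy dog"]
hypotheses = ["the quick brown fox jumped over a lazy dog"]

print(f"WER: {jiwer.wer(references, hypotheses):.3f}")
```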
null
Non_BioNLP
## Model Summary

Phi-4-multimodal-instruct is a lightweight open multimodal foundation model that leverages the language, vision, and speech research and datasets used for the Phi-3.5 and 4.0 models. The model processes text, image, and audio inputs, generating text outputs, and comes with a 128K token context length. The model underwent an enhancement process, incorporating supervised fine-tuning, direct preference optimization, and RLHF (Reinforcement Learning from Human Feedback) to support precise instruction adherence and safety measures. The languages that each modality supports are the following:
- Text: Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, Ukrainian
- Vision: English
- Audio: English, Chinese, German, French, Italian, Japanese, Spanish, Portuguese

📰 [Phi-4-multimodal Microsoft Blog](https://aka.ms/phi4-feb2025) <br>
📖 [Phi-4-multimodal Technical Report](https://aka.ms/phi-4-multimodal/techreport) <br>
🏡 [Phi Portal](https://aka.ms/phi-4-multimodal/azure) <br>
👩‍🍳 [Phi Cookbook](https://github.com/microsoft/PhiCookBook) <br>
🖥️ Try It on [Azure](https://aka.ms/phi-4-multimodal/azure), [NVIDIA Playground](https://aka.ms/phi-4-multimodal/nvidia) <br>
📱 Huggingface Spaces [Thoughts Organizer](https://huggingface.co/spaces/microsoft/ThoughtsOrganizer), [Stories Come Alive](https://huggingface.co/spaces/microsoft/StoriesComeAlive), [Phine Speech Translator](https://huggingface.co/spaces/microsoft/PhineSpeechTranslator) <br>

**Phi-4**: [[multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct) | [onnx](https://huggingface.co/microsoft/Phi-4-multimodal-instruct-onnx)]; [mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct)

Watch as Phi-4 Multimodal analyzes spoken language to help plan a trip to Seattle, demonstrating its advanced audio processing and recommendation capabilities.

<div style="width: 800px; height: 400px; margin: 0 auto;">
  <video autoplay muted loop controls playsinline style="width: 100%; height: 100%; object-fit: contain;">
    <source src="https://phi4releasestorage.blob.core.windows.net/demo/Phi-4-multimodal_SeattleTrip.mp4" type="video/mp4">
    Your browser does not support the video tag.
  </video>
</div>

See how Phi-4 Multimodal tackles complex mathematical problems through visual inputs, demonstrating its ability to process and solve equations presented in images.

<div style="width: 800px; height: 400px; margin: 0 auto;">
  <video autoplay muted loop controls playsinline style="width: 100%; height: 100%; object-fit: contain;">
    <source src="https://phi4releasestorage.blob.core.windows.net/demo/Phi-4-multimodal_Math.mp4" type="video/mp4">
    Your browser does not support the video tag.
  </video>
</div>

Explore how Phi-4 Mini functions as an intelligent agent, showcasing its reasoning and task execution abilities in complex scenarios.

<div style="width: 800px; height: 400px; margin: 0 auto;">
  <video autoplay muted loop controls playsinline style="width: 100%; height: 100%; object-fit: contain;">
    <source src="https://phi4releasestorage.blob.core.windows.net/demo/Phi-4-mini_Agents.mp4" type="video/mp4">
    Your browser does not support the video tag.
  </video>
</div>

## Intended Uses

### Primary Use Cases

The model is intended for broad multilingual and multimodal commercial and research use.
The model is intended for general purpose AI systems and applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially math and logic)
4) Function and tool calling
5) General image understanding
6) Optical character recognition
7) Chart and table understanding
8) Multiple image comparison
9) Multi-image or video clip summarization
10) Speech recognition
11) Speech translation
12) Speech QA
13) Speech summarization
14) Audio understanding

The model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.

### Use Case Considerations

The model is not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models and multimodal models, as well as performance differences across languages, as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including but not limited to privacy, trade compliance laws, etc.) that are relevant to their use case.

***Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.***

## Release Notes

This release of Phi-4-multimodal-instruct is based on valuable user feedback from the Phi-3 series. Previously, users could use a speech recognition model to talk to the Mini and Vision models. To achieve this, users needed to use a pipeline of two models: one model to transcribe the audio to text, and another model for the language or vision tasks. With this pipeline, the core model was not provided the full breadth of input information – e.g. it could not directly observe multiple speakers or background noise, or jointly align speech, vision, and language information at the same time in the same representation space. With Phi-4-multimodal-instruct, a single new open model has been trained across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. The model employs a new architecture, a larger vocabulary for efficiency, multilingual and multimodal support, and improved post-training techniques for instruction following and function calling, together with additional data, leading to substantial gains on key multimodal capabilities. It is anticipated that Phi-4-multimodal-instruct will greatly benefit app developers and various use cases. The enthusiastic support for the Phi-4 series is greatly appreciated. Feedback on Phi-4 is welcomed and crucial to the model's evolution and improvement. Thank you for being part of this journey!

## Model Quality

To understand the capabilities, Phi-4-multimodal-instruct was compared with a set of models over a variety of benchmarks using an internal benchmark platform (see Appendix A for the benchmark methodology). Users can refer to the Phi-4-Mini-Instruct model card for details of the language benchmarks. Below is a high-level overview of the model quality on representative speech and vision benchmarks.

### Speech

Phi-4-multimodal-instruct was observed as:
- Having strong automatic speech recognition (ASR) and speech translation (ST) performance, surpassing the expert ASR model WhisperV3 and the ST model SeamlessM4T-v2-Large.
- Ranking number 1 on the Hugging Face OpenASR leaderboard with a word error rate of 6.14%, versus 6.5% for the next-best model, as of Jan 17, 2025.
- Being the first open-sourced model that can perform speech summarization, with performance close to that of GPT-4o.
- Having a gap with closed models, e.g. Gemini-1.5-Flash and GPT-4o-realtime-preview, on the speech QA task. Work is being undertaken to improve this capability in the next iterations.

#### Speech Recognition (lower is better)

The performance of Phi-4-multimodal-instruct on the aggregated benchmark datasets:

![alt text](./figures/speech_recognition.png)

The performance of Phi-4-multimodal-instruct on different languages, averaging the WERs of CommonVoice and FLEURS:

![alt text](./figures/speech_recog_by_lang.png)

#### Speech Translation (higher is better)

Translating from German, Spanish, French, Italian, Japanese, Portuguese, and Chinese to English:

![alt text](./figures/speech_translate.png)

Translating from English to German, Spanish, French, Italian, Japanese, Portuguese, and Chinese. Note that WhisperV3 does not support this capability:

![alt text](./figures/speech_translate_2.png)

#### Speech Summarization (higher is better)

![alt text](./figures/speech_summarization.png)

#### Speech QA

MT Bench scores are scaled by 10x to match the score range of MMMLU:

![alt text](./figures/speech_qa.png)

#### Audio Understanding

AIR-Bench scores are scaled by 10x to match the score range of MMAU:

![alt text](./figures/audio_understand.png)

### Vision

#### Vision-Speech tasks

Phi-4-multimodal-instruct is capable of processing both image and audio together. The following table shows the model quality when the input query for vision content is synthetic speech, on chart/table understanding and document reasoning tasks. Compared to other existing state-of-the-art omni models that can accept both audio and visual signals as input, Phi-4-multimodal-instruct achieves much stronger performance on multiple benchmarks.

| Benchmarks | Phi-4-multimodal-instruct | InternOmni-7B | Gemini-2.0-Flash-Lite-prv-02-05 | Gemini-2.0-Flash | Gemini-1.5-Pro |
|-----------------------|--------------------------|---------------|--------------------------------|-----------------|----------------|
| s_AI2D | **68.9** | 53.9 | 62.0 | **69.4** | 67.7 |
| s_ChartQA | **69.0** | 56.1 | 35.5 | 51.3 | 46.9 |
| s_DocVQA | **87.3** | 79.9 | 76.0 | 80.3 | 78.2 |
| s_InfoVQA | **63.7** | 60.3 | 59.4 | 63.6 | **66.1** |
| **Average** | **72.2** | **62.6** | **58.2** | **66.2** | **64.7** |

### Vision tasks

To understand the vision capabilities, Phi-4-multimodal-instruct was compared with a set of models over a variety of zero-shot benchmarks using an internal benchmark platform.
At the high-level overview of the model quality on representative benchmarks: | Dataset | Phi-4-multimodal-ins | Phi-3.5-vision-ins | Qwen 2.5-VL-3B-ins | Intern VL 2.5-4B | Qwen 2.5-VL-7B-ins | Intern VL 2.5-8B | Gemini 2.0-Flash Lite-preview-0205 | Gemini2.0-Flash | Claude-3.5-Sonnet-2024-10-22 | Gpt-4o-2024-11-20 | |----------------------------------|---------------------|-------------------|-------------------|-----------------|-------------------|-----------------|--------------------------------|-----------------|----------------------------|------------------| | **Popular aggregated benchmark** | | | | | | | | | | | | MMMU | **55.1** | 43.0 | 47.0 | 48.3 | 51.8 | 50.6 | 54.1 | **64.7** | 55.8 | 61.7 | | MMBench (dev-en) | **86.7** | 81.9 | 84.3 | 86.8 | 87.8 | 88.2 | 85.0 | **90.0** | 86.7 | 89.0 | | MMMU-Pro (std/vision) | **38.5** | 21.8 | 29.9 | 32.4 | 36.9 | 34.4 | 45.1 | **54.4** | 54.3 | 53.0 | | **Visual science reasoning** | | | | | | | | | | | | ScienceQA Visual (img-test) | **97.5** | 91.3 | 79.4 | 96.2 | 87.7 | **97.3** | 85.0 | 88.3 | 81.2 | 88.2 | | **Visual math reasoning** | | | | | | | | | | | | MathVista (testmini) | **62.4** | 43.9 | 60.8 | 51.2 | **67.8** | 56.7 | 57.6 | 47.2 | 56.9 | 56.1 | | InterGPS | **48.6** | 36.3 | 48.3 | 53.7 | 52.7 | 54.1 | 57.9 | **65.4** | 47.1 | 49.1 | | **Chart & table reasoning** | | | | | | | | | | | | AI2D | **82.3** | 78.1 | 78.4 | 80.0 | 82.6 | 83.0 | 77.6 | 82.1 | 70.6 | **83.8** | | ChartQA | **81.4** | 81.8 | 80.0 | 79.1 | **85.0** | 81.0 | 73.0 | 79.0 | 78.4 | 75.1 | | DocVQA | **93.2** | 69.3 | 93.9 | 91.6 | **95.7** | 93.0 | 91.2 | 92.1 | 95.2 | 90.9 | | InfoVQA | **72.7** | 36.6 | 77.1 | 72.1 | **82.6** | 77.6 | 73.0 | 77.8 | 74.3 | 71.9 | | **Document Intelligence** | | | | | | | | | | | | TextVQA (val) | **75.6** | 72.0 | 76.8 | 70.9 | **77.7** | 74.8 | 72.9 | 74.4 | 58.6 | 73.1 | | OCR Bench | **84.4** | 63.8 | 82.2 | 71.6 | **87.7** | 74.8 | 75.7 | 81.0 | 77.0 | 77.7 | | **Object visual presence verification** | | | | | | | | | | | | POPE | **85.6** | 86.1 | 87.9 | 89.4 | 87.5 | **89.1** | 87.5 | 88.0 | 82.6 | 86.5 | | **Multi-image perception** | | | | | | | | | | | | BLINK | **61.3** | 57.0 | 48.1 | 51.2 | 55.3 | 52.5 | 59.3 | **64.0** | 56.9 | 62.4 | | Video MME 16 frames | **55.0** | 50.8 | 56.5 | 57.3 | 58.2 | 58.7 | 58.8 | 65.5 | 60.2 | **68.2** | | **Average** | **72.0** | **60.9** | **68.7** | **68.8** | **73.1** | **71.1** | **70.2** | **74.3** | **69.1** | **72.4** | ![alt text](./figures/vision_radar.png) #### Visual Perception Below are the comparison results on existing multi-image tasks. On average, Phi-4-multimodal-instruct outperforms competitor models of the same size and competitive with much bigger models on multi-frame capabilities. BLINK is an aggregated benchmark with 14 visual tasks that humans can solve very quickly but are still hard for current multimodal LLMs. 
| Dataset | Phi-4-multimodal-instruct | Qwen2.5-VL-3B-Instruct | InternVL 2.5-4B | Qwen2.5-VL-7B-Instruct | InternVL 2.5-8B | Gemini-2.0-Flash-Lite-prv-02-05 | Gemini-2.0-Flash | Claude-3.5-Sonnet-2024-10-22 | Gpt-4o-2024-11-20 | |----------------------------|--------------------------|----------------------|-----------------|----------------------|-----------------|--------------------------------|-----------------|----------------------------|------------------| | Art Style | **86.3** | 58.1 | 59.8 | 65.0 | 65.0 | 76.9 | 76.9 | 68.4 | 73.5 | | Counting | **60.0** | 67.5 | 60.0 | 66.7 | **71.7** | 45.8 | 69.2 | 60.8 | 65.0 | | Forensic Detection | **90.2** | 34.8 | 22.0 | 43.9 | 37.9 | 31.8 | 74.2 | 63.6 | 71.2 | | Functional Correspondence | **30.0** | 20.0 | 26.9 | 22.3 | 27.7 | 48.5 | **53.1** | 34.6 | 42.3 | | IQ Test | **22.7** | 25.3 | 28.7 | 28.7 | 28.7 | 28.0 | **30.7** | 20.7 | 25.3 | | Jigsaw | **68.7** | 52.0 | **71.3** | 69.3 | 53.3 | 62.7 | 69.3 | 61.3 | 68.7 | | Multi-View Reasoning | **76.7** | 44.4 | 44.4 | 54.1 | 45.1 | 55.6 | 41.4 | 54.9 | 54.1 | | Object Localization | **52.5** | 55.7 | 53.3 | 55.7 | 58.2 | 63.9 | **67.2** | 58.2 | 65.6 | | Relative Depth | **69.4** | 68.5 | 68.5 | 80.6 | 76.6 | **81.5** | 72.6 | 66.1 | 73.4 | | Relative Reflectance | **26.9** | **38.8** | **38.8** | 32.8 | **38.8** | 33.6 | 34.3 | 38.1 | 38.1 | | Semantic Correspondence | **52.5** | 32.4 | 33.8 | 28.8 | 24.5 | **56.1** | 55.4 | 43.9 | 47.5 | | Spatial Relation | **72.7** | 80.4 | 86.0 | **88.8** | 86.7 | 74.1 | 79.0 | 74.8 | 83.2 | | Visual Correspondence | **67.4** | 28.5 | 39.5 | 50.0 | 44.2 | 84.9 | **91.3** | 72.7 | 82.6 | | Visual Similarity | **86.7** | 67.4 | 88.1 | 87.4 | 85.2 | **87.4** | 80.7 | 79.3 | 83.0 | | **Overall** | **61.6** | **48.1** | **51.2** | **55.3** | **52.5** | **59.3** | **64.0** | **56.9** | **62.4** | ![alt text](./figures/multi_image.png) ## Usage ### Requirements Phi-4 family has been integrated in the `4.48.2` version of `transformers`. The current `transformers` version can be verified with: `pip list | grep transformers`. Examples of required packages: ``` flash_attn==2.7.4.post1 torch==2.6.0 transformers==4.48.2 accelerate==1.3.0 soundfile==0.13.1 pillow==11.1.0 scipy==1.15.2 torchvision==0.21.0 backoff==2.2.1 peft==0.13.2 ``` Phi-4-multimodal-instruct is also available in [Azure AI Studio](https://aka.ms/phi-4-multimodal/azure) ### Tokenizer Phi-4-multimodal-instruct supports a vocabulary size of up to `200064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-4-multimodal-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Input Formats Given the nature of the training data, the Phi-4-multimodal-instruct model is best suited for prompts using the chat format as follows: #### Text chat format This format is used for general conversation and instructions: ` <|system|>You are a helpful assistant.<|end|><|user|>How to explain Internet for a medieval knight?<|end|><|assistant|> ` #### Tool-enabled function-calling format This format is used when the user wants the model to provide function calls based on the given tools. The user should provide the available tools in the system prompt, wrapped by <|tool|> and <|/tool|> tokens. The tools should be specified in JSON format, using a JSON dump structure. 
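As a sketch of how such a prompt might be assembled in Python (reusing the hypothetical weather tool from the example that follows; the assembly code itself is an illustration, not an official utility):

```python
import json

# Hypothetical tool definition, matching the example below.
tools = [{
    "name": "get_weather_updates",
    "description": "Fetches weather updates for a given city using the RapidAPI Weather API.",
    "parameters": {
        "city": {
            "description": "The name of the city for which to retrieve weather information.",
            "type": "str",
            "default": "London",
        }
    },
}]

# The JSON dump of the tool list is wrapped in <|tool|> ... <|/tool|> inside the system turn.
prompt = (
    "<|system|>You are a helpful assistant with some tools."
    f"<|tool|>{json.dumps(tools)}<|/tool|><|end|>"
    "<|user|>What is the weather like in Paris today?<|end|>"
    "<|assistant|>"
)
print(prompt)
```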
Example of the resulting prompt:

`<|system|>You are a helpful assistant with some tools.<|tool|>[{"name": "get_weather_updates", "description": "Fetches weather updates for a given city using the RapidAPI Weather API.", "parameters": {"city": {"description": "The name of the city for which to retrieve weather information.", "type": "str", "default": "London"}}}]<|/tool|><|end|><|user|>What is the weather like in Paris today?<|end|><|assistant|>`

#### Vision-Language Format

This format is used for conversation with an image:

`<|user|><|image_1|>Describe the image in detail.<|end|><|assistant|>`

For multiple images, the user needs to insert multiple image placeholders in the prompt as below:

`<|user|><|image_1|><|image_2|><|image_3|>Summarize the content of the images.<|end|><|assistant|>`

#### Speech-Language Format

This format is used for various speech and audio tasks:

`<|user|><|audio_1|>{task prompt}<|end|><|assistant|>`

The task prompt can vary for different tasks.

Automatic Speech Recognition:

`<|user|><|audio_1|>Transcribe the audio clip into text.<|end|><|assistant|>`

Automatic Speech Translation:

`<|user|><|audio_1|>Translate the audio to {lang}.<|end|><|assistant|>`

Automatic Speech Translation with chain-of-thought:

`<|user|><|audio_1|>Transcribe the audio to text, and then translate the audio to {lang}. Use <sep> as a separator between the original transcript and the translation.<|end|><|assistant|>`

Spoken-query Question Answering:

`<|user|><|audio_1|><|end|><|assistant|>`

#### Vision-Speech Format

This format is used for conversation with an image and audio. The audio may contain a query related to the image:

`<|user|><|image_1|><|audio_1|><|end|><|assistant|>`

For multiple images, the user needs to insert multiple image placeholders in the prompt as below:

`<|user|><|image_1|><|image_2|><|image_3|><|audio_1|><|end|><|assistant|>`

**Vision**
- Any common RGB/gray image format (e.g., ".jpg", ".jpeg", ".png", ".ppm", ".bmp", ".pgm", ".tif", ".tiff", ".webp") can be supported.
- Resolution depends on the GPU memory size. Higher resolution and more images will produce more tokens, thus using more GPU memory. During training, up to 64 crops are supported; for a square image, this corresponds to a resolution of around (8*448 by 8*448). For multiple images, at most 64 frames are supported, but with more frames as input, the resolution of each frame needs to be reduced to fit in memory.

**Audio**
- Any audio format that can be loaded by the soundfile package should be supported.
- To maintain satisfactory performance, the suggested maximum audio length is 40 seconds. For summarization tasks, the suggested maximum audio length is 30 minutes.

### Loading the model locally

After obtaining the Phi-4-multimodal-instruct model checkpoints, users can use this sample code for inference.
```python
import requests
import torch
import os
import io
from PIL import Image
import soundfile as sf
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig
from urllib.request import urlopen

# Define model path
model_path = "microsoft/Phi-4-multimodal-instruct"

# Load model and processor
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation='flash_attention_2',
).cuda()

# Load generation config
generation_config = GenerationConfig.from_pretrained(model_path)

# Define prompt structure
user_prompt = '<|user|>'
assistant_prompt = '<|assistant|>'
prompt_suffix = '<|end|>'

# Part 1: Image Processing
print("\n--- IMAGE PROCESSING ---")
image_url = 'https://www.ilankelman.org/stopsigns/australia.jpg'
prompt = f'{user_prompt}<|image_1|>What is shown in this image?{prompt_suffix}{assistant_prompt}'
print(f'>>> Prompt\n{prompt}')

# Download and open image
image = Image.open(requests.get(image_url, stream=True).raw)
inputs = processor(text=prompt, images=image, return_tensors='pt').to('cuda:0')

# Generate response
generate_ids = model.generate(
    **inputs,
    max_new_tokens=1000,
    generation_config=generation_config,
)
generate_ids = generate_ids[:, inputs['input_ids'].shape[1]:]
response = processor.batch_decode(
    generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0]
print(f'>>> Response\n{response}')

# Part 2: Audio Processing
print("\n--- AUDIO PROCESSING ---")
audio_url = "https://upload.wikimedia.org/wikipedia/commons/b/b0/Barbara_Sahakian_BBC_Radio4_The_Life_Scientific_29_May_2012_b01j5j24.flac"
speech_prompt = "Transcribe the audio to text, and then translate the audio to French. Use <sep> as a separator between the original transcript and the translation."
prompt = f'{user_prompt}<|audio_1|>{speech_prompt}{prompt_suffix}{assistant_prompt}'
print(f'>>> Prompt\n{prompt}')

# Download and open audio file
audio, samplerate = sf.read(io.BytesIO(urlopen(audio_url).read()))

# Process with the model
inputs = processor(text=prompt, audios=[(audio, samplerate)], return_tensors='pt').to('cuda:0')

generate_ids = model.generate(
    **inputs,
    max_new_tokens=1000,
    generation_config=generation_config,
)
generate_ids = generate_ids[:, inputs['input_ids'].shape[1]:]
response = processor.batch_decode(
    generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0]
print(f'>>> Response\n{response}')
```

## Responsible AI Considerations

Like other language models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: The Phi models are trained primarily on English language content across text, speech, and visual inputs, with some additional multilingual coverage. Performance may vary significantly across different modalities and languages:
  + Text: Languages other than English will experience reduced performance, with varying levels of degradation across different non-English languages. English language varieties with less representation in the training data may perform worse than standard American English.
  + Speech: Speech recognition and processing show similar language-based performance patterns, with optimal performance for standard American English accents and pronunciations.
Other English accents, dialects, and non-English languages may experience lower recognition accuracy and response quality. Background noise, audio quality, and speaking speed can further impact performance. + Vision: Visual processing capabilities may be influenced by cultural and geographical biases in the training data. The model may show reduced performance when analyzing images containing text in non-English languages or visual elements more commonly found in non-Western contexts. Image quality, lighting conditions, and composition can also affect processing accuracy. + Multilingual performance and safety gaps: We believe it is important to make language models more widely available across different languages, but the Phi 4 models still exhibit challenges common across multilingual releases. As with any deployment of LLMs, developers will be better positioned to test for performance or safety gaps for their linguistic and cultural context and customize the model with additional fine-tuning and appropriate safeguards. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups, cultural contexts, or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: The majority of Phi 4 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, it is strongly recommended that users manually verify all API uses. + Long Conversation: Phi 4 models, like other models, can in some cases generate responses that are repetitive, unhelpful, or inconsistent in very long chat sessions in both English and non-English languages. Developers are encouraged to place appropriate mitigations, like limiting conversation turns to account for the possible conversational drift. + Inference of Sensitive Attributes: The Phi 4 models can sometimes attempt to infer sensitive attributes (such as personality characteristics, country of origin, gender, etc...) from the users’ voices when specifically asked to do so. Phi 4-multimodal-instruct is not designed or intended to be used as a biometric categorization system to categorize individuals based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. This behavior can be easily and efficiently mitigated at the application level by a system message. Developers should apply responsible AI best practices, including mapping, measuring, and mitigating risks associated with their specific use case and cultural, linguistic context. Phi 4 family of models are general purpose models. 
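As an illustration of the system-message mitigation mentioned above for voice-based inference of sensitive attributes, a hypothetical system prompt (illustrative only, not an officially validated wording) could look like:

`<|system|>You are a helpful assistant. Do not infer or comment on a user's gender, age, nationality, health status, or other sensitive attributes from their voice, and politely decline requests to do so.<|end|><|user|><|audio_1|><|end|><|assistant|>`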
As developers plan to deploy these models for specific use cases, they are encouraged to fine-tune the models for their use case and leverage the models as part of broader AI systems with language-specific safeguards in place. Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model + **Architecture:** Phi-4-multimodal-instruct has 5.6B parameters and is a multimodal transformer model. The model has the pretrained Phi-4-Mini-Instruct as the backbone language model, and the advanced encoders and adapters of vision and speech.<br> + **Inputs:** Text, image, and audio. It is best suited for prompts using the chat format.<br> + **Context length:** 128K tokens<br> + **GPUs:** 512 A100-80G<br> + **Training time:** 28 days<br> + **Training data:** 5T tokens, 2.3M speech hours, and 1.1T image-text tokens<br> + **Outputs:** Generated text in response to the input<br> + **Dates:** Trained between December 2024 and January 2025<br> + **Status:** This is a static model trained on offline datasets with the cutoff date of June 2024 for publicly available data.<br> + **Supported languages:** + Text: Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, Ukrainian<br> + Vision: English<br> + Audio: English, Chinese, German, French, Italian, Japanese, Spanish, Portuguese<br> + **Release date:** February 2025<br> ### Training Datasets Phi-4-multimodal-instruct's training data includes a wide variety of sources, totaling 5 trillion text tokens, and is a combination of 1) publicly available documents filtered for quality, selected high-quality educational data, and code 2) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (e.g., science, daily activities, theory of mind, etc.) 
3) high quality human labeled data in chat format 4) selected high-quality image-text interleave data 5) synthetic and publicly available image, multi-image, and video data 6) anonymized in-house speech-text pair data with strong/weak transcriptions 7) selected high-quality publicly available and anonymized in-house speech data with task-specific supervisions 8) selected synthetic speech data 9) synthetic vision-speech data. Focus was placed on the quality of data that could potentially improve the reasoning ability for the model, and the publicly available documents were filtered to contain a preferred level of knowledge. As an example, the result of a game in premier league on a particular day might be good training data for large foundation models, but such information was removed for the Phi-4-multimodal-instruct to leave more model capacity for reasoning for the model's small size. The data collection process involved sourcing information from publicly available documents, with a focus on filtering out undesirable documents and images. To safeguard privacy, image and text data sources were filtered to remove or scrub potentially personal data from the training data. The decontamination process involved normalizing and tokenizing the dataset, then generating and comparing n-grams between the target dataset and benchmark datasets. Samples with matching n-grams above a threshold were flagged as contaminated and removed from the dataset. A detailed contamination report was generated, summarizing the matched text, matching ratio, and filtered results for further analysis. ### Fine-tuning A basic example of supervised fine-tuning (SFT) for [speech](https://huggingface.co/microsoft/Phi-4-multimodal-instruct/resolve/main/sample_finetune_speech.py) and [vision](https://huggingface.co/microsoft/Phi-4-multimodal-instruct/resolve/main/sample_finetune_vision.py) is provided respectively. ## Safety The Phi-4 family of models has adopted a robust safety post-training approach. This approach leverages a variety of both open-source and in-house generated datasets. The overall technique employed for safety alignment is a combination of SFT (Supervised Fine-Tuning), DPO (Direct Preference Optimization), and RLHF (Reinforcement Learning from Human Feedback) approaches by utilizing human-labeled and synthetic English-language datasets, including publicly available datasets focusing on helpfulness and harmlessness, as well as various questions and answers targeted to multiple safety categories. For non-English languages, existing datasets were extended via machine translation. Speech Safety datasets were generated by running Text Safety datasets through Azure TTS (Text-To-Speech) Service, for both English and non-English languages. Vision (text & images) Safety datasets were created to cover harm categories identified both in public and internal multi-modal RAI datasets. ### Safety Evaluation and Red-Teaming Various evaluation techniques including red teaming, adversarial conversation simulations, and multilingual safety evaluation benchmark datasets were leveraged to evaluate Phi-4 models' propensity to produce undesirable outputs across multiple languages and risk categories. Several approaches were used to compensate for the limitations of one approach alone. 
{"language": ["multilingual", "ar", "zh", "cs", "da", "nl", "en", "fi", "fr", "de", "he", "hu", "it", "ja", "ko", false, "pl", "pt", "ru", "es", "sv", "th", "tr", "uk"], "library_name": "transformers", "license": "mit", "license_link": "https://huggingface.co/microsoft/Phi-4-multimodal-instruct/resolve/main/LICENSE", "tags": ["nlp", "code", "audio", "automatic-speech-recognition", "speech-summarization", "speech-translation", "visual-question-answering", "phi-4-multimodal", "phi", "phi-4-mini"], "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}, {"messages": [{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}]}]}
task
[ "QUESTION_ANSWERING", "TRANSLATION", "SUMMARIZATION" ]
46,058
MoritzLaurer/ernie-m-large-mnli-xnli
MoritzLaurer
zero-shot-classification
[ "transformers", "pytorch", "safetensors", "ernie_m", "text-classification", "zero-shot-classification", "nli", "multilingual", "en", "ar", "bg", "de", "el", "es", "fr", "hi", "ru", "sw", "th", "tr", "ur", "vi", "zh", "dataset:multi_nli", "dataset:xnli", "arxiv:2012.15674", "arxiv:1809.05053", "arxiv:2111.09543", "arxiv:1911.02116", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-02-16T18:00:07Z
2023-03-20T08:28:34+00:00
171
18
--- datasets: - multi_nli - xnli language: - multilingual - en - ar - bg - de - el - es - fr - hi - ru - sw - th - tr - ur - vi - zh license: apache-2.0 metrics: - accuracy pipeline_tag: zero-shot-classification tags: - zero-shot-classification - text-classification - nli - pytorch widget: - text: Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU candidate_labels: politics, economy, entertainment, environment --- # Multilingual ernie-m-large-mnli-xnli ## Model description This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual zero-shot classification. The underlying model was pre-trained by Baidu, based on Meta's RoBERTa (pre-trained on the [CC100 multilingual dataset](https://huggingface.co/datasets/cc100). It was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages, as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli). The model was introduced by Baidu in [this paper](https://arxiv.org/pdf/2012.15674.pdf). The model outperforms RoBERTa models of equal size. If you are looking for a much faster (but less performant) model, you can try [multilingual-MiniLMv2-L6-mnli-xnli](https://huggingface.co/MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli). If you are looking for a base-sized model with a good mix of performance and speed, you can try [mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) ### How to use the model #### Simple zero-shot classification pipeline ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model="MoritzLaurer/ernie-m-large-mnli-xnli") sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU" candidate_labels = ["politics", "economy", "entertainment", "environment"] output = classifier(sequence_to_classify, candidate_labels, multi_label=False) print(output) ``` #### NLI use-case ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") model_name = "MoritzLaurer/ernie-m-large-mnli-xnli" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device) premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU" hypothesis = "Emmanuel Macron is the President of France" input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt") output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu" prediction = torch.softmax(output["logits"][0], -1).tolist() label_names = ["entailment", "neutral", "contradiction"] prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)} print(prediction) ``` ### Training data This model was trained on the XNLI development dataset and the MNLI train dataset. The XNLI development set consists of 2490 professionally translated texts from English to 14 other languages (37350 texts in total) (see [this paper](https://arxiv.org/pdf/1809.05053.pdf)). 
Note that XNLI contains a training set of 15 machine-translated versions of the MNLI dataset for 15 languages, but due to quality issues with these machine translations, this model was only trained on the professional translations from the XNLI development set and the original English MNLI training set (392 702 texts). Not using machine-translated texts avoids overfitting the model to the 15 languages, avoids catastrophic forgetting of the other 85 languages ernie-m was pre-trained on, and significantly reduces training costs. ### Training procedure ernie-m-large-mnli-xnli was trained using the Hugging Face trainer with the following hyperparameters. ``` training_args = TrainingArguments( num_train_epochs=3, # total number of training epochs learning_rate=3e-05, per_device_train_batch_size=16, # batch size per device during training gradient_accumulation_steps=2, per_device_eval_batch_size=16, # batch size for evaluation warmup_ratio=0.1, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay fp16=True, ) ``` ### Eval results The model was evaluated on the XNLI test set on 15 languages (5010 texts per language, 75150 in total). Note that multilingual NLI models are capable of classifying NLI texts without receiving NLI training data in the specific language (cross-lingual transfer). This means that the model is also capable of doing NLI on the other 85 languages ernie-m was pre-trained on, but performance is most likely lower than for those languages available in XNLI. Also note that if other multilingual models on the model hub claim performance of around 90% on languages other than English, the authors have most likely made a mistake during testing, since none of the latest papers shows a multilingual average performance of more than a few points above 80% on XNLI (see [here](https://arxiv.org/pdf/2111.09543.pdf) or [here](https://arxiv.org/pdf/1911.02116.pdf)). |Datasets|avg_xnli|mnli_m|mnli_mm|ar|bg|de|el|en|es|fr|hi|ru|sw|th|tr|ur|vi|zh| | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | |Accuracy|0.822|0.881|0.878|0.818|0.853|0.84|0.837|0.882|0.855|0.849|0.799|0.83|0.751|0.809|0.818|0.76|0.826|0.799| |Inference text/sec (A100, batch=120)|1415.0|783.0|774.0|1487.0|1396.0|1430.0|1206.0|1623.0|1482.0|1291.0|1302.0|1366.0|1484.0|1500.0|1609.0|1344.0|1403.0|1302.0| ## Limitations and bias Please consult the original ernie-m paper and literature on different NLI datasets for potential biases. ## Citation If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k. ## Ideas for cooperation or questions? If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/) ## Debugging and issues The ernie-m architecture is only supported with transformers==4.27 or higher (which is not yet released and causes an error in the inference widget as of 03.03.23).
In order to run the model before the release of 4.27, you need to install transformers from source with: `pip install git+https://github.com/huggingface/transformers` as well as the sentencepiece tokenizer with: `pip install sentencepiece` After the release, you can run: `pip install transformers[sentencepiece]>=4.27`
null
Non_BioNLP
# Multilingual ernie-m-large-mnli-xnli ## Model description This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual zero-shot classification. The underlying model was pre-trained by Baidu, based on Meta's RoBERTa (pre-trained on the [CC100 multilingual dataset](https://huggingface.co/datasets/cc100). It was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages, as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli). The model was introduced by Baidu in [this paper](https://arxiv.org/pdf/2012.15674.pdf). The model outperforms RoBERTa models of equal size. If you are looking for a much faster (but less performant) model, you can try [multilingual-MiniLMv2-L6-mnli-xnli](https://huggingface.co/MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli). If you are looking for a base-sized model with a good mix of performance and speed, you can try [mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) ### How to use the model #### Simple zero-shot classification pipeline ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model="MoritzLaurer/ernie-m-large-mnli-xnli") sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU" candidate_labels = ["politics", "economy", "entertainment", "environment"] output = classifier(sequence_to_classify, candidate_labels, multi_label=False) print(output) ``` #### NLI use-case ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") model_name = "MoritzLaurer/ernie-m-large-mnli-xnli" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device) premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU" hypothesis = "Emmanuel Macron is the President of France" input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt") output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu" prediction = torch.softmax(output["logits"][0], -1).tolist() label_names = ["entailment", "neutral", "contradiction"] prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)} print(prediction) ``` ### Training data This model was trained on the XNLI development dataset and the MNLI train dataset. The XNLI development set consists of 2490 professionally translated texts from English to 14 other languages (37350 texts in total) (see [this paper](https://arxiv.org/pdf/1809.05053.pdf)). Note that the XNLI contains a training set of 15 machine translated versions of the MNLI dataset for 15 languages, but due to quality issues with these machine translations, this model was only trained on the professional translations from the XNLI development set and the original English MNLI training set (392 702 texts). Not using machine translated texts can avoid overfitting the model to the 15 languages; avoids catastrophic forgetting of the other 85 languages ernie-m was pre-trained on; and significantly reduces training costs. ### Training procedure ernie-m-large-mnli-xnli was trained using the Hugging Face trainer with the following hyperparameters. 
``` training_args = TrainingArguments( num_train_epochs=3, # total number of training epochs learning_rate=3e-05, per_device_train_batch_size=16, # batch size per device during training gradient_accumulation_steps=2, per_device_eval_batch_size=16, # batch size for evaluation warmup_ratio=0.1, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay fp16=True, ) ``` ### Eval results The model was evaluated on the XNLI test set on 15 languages (5010 texts per language, 75150 in total). Note that multilingual NLI models are capable of classifying NLI texts without receiving NLI training data in the specific language (cross-lingual transfer). This means that the model is also capable of doing NLI on the other 85 languages ernie-m was pre-trained on, but performance is most likely lower than for those languages available in XNLI. Also note that if other multilingual models on the model hub claim performance of around 90% on languages other than English, the authors have most likely made a mistake during testing, since none of the latest papers shows a multilingual average performance of more than a few points above 80% on XNLI (see [here](https://arxiv.org/pdf/2111.09543.pdf) or [here](https://arxiv.org/pdf/1911.02116.pdf)). |Datasets|avg_xnli|mnli_m|mnli_mm|ar|bg|de|el|en|es|fr|hi|ru|sw|th|tr|ur|vi|zh| | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | |Accuracy|0.822|0.881|0.878|0.818|0.853|0.84|0.837|0.882|0.855|0.849|0.799|0.83|0.751|0.809|0.818|0.76|0.826|0.799| |Inference text/sec (A100, batch=120)|1415.0|783.0|774.0|1487.0|1396.0|1430.0|1206.0|1623.0|1482.0|1291.0|1302.0|1366.0|1484.0|1500.0|1609.0|1344.0|1403.0|1302.0| ## Limitations and bias Please consult the original ernie-m paper and literature on different NLI datasets for potential biases. ## Citation If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k. ## Ideas for cooperation or questions? If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/) ## Debugging and issues The ernie-m architecture is only supported with transformers==4.27 or higher (which is not yet released and causes an error in the inference widget as of 03.03.23). In order to run the model before the release of 4.27, you need to install transformers from source with: `pip install git+https://github.com/huggingface/transformers` as well as the sentencepiece tokenizer with: `pip install sentencepiece` After the release, you can run: `pip install transformers[sentencepiece]>=4.27`
{"datasets": ["multi_nli", "xnli"], "language": ["multilingual", "en", "ar", "bg", "de", "el", "es", "fr", "hi", "ru", "sw", "th", "tr", "ur", "vi", "zh"], "license": "apache-2.0", "metrics": ["accuracy"], "pipeline_tag": "zero-shot-classification", "tags": ["zero-shot-classification", "text-classification", "nli", "pytorch"], "widget": [{"text": "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU", "candidate_labels": "politics, economy, entertainment, environment"}]}
task
[ "TRANSLATION" ]
46,059
marklim100/test-model-v2
marklim100
text-classification
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
2023-08-04T07:40:11Z
2023-08-04T07:40:28+00:00
11
0
--- license: apache-2.0 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # marklim100/test-model-v2 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("marklim100/test-model-v2") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
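The card above describes the two-step SetFit recipe (contrastive fine-tuning of a Sentence Transformer, then a classification head) but only shows inference. The sketch below illustrates how such a model is typically trained with the SetFit library; the base checkpoint, the tiny toy dataset, and the hyperparameters are illustrative assumptions rather than the settings actually used for this repository.

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Toy few-shot dataset (illustrative only; replace with your own labelled examples).
train_dataset = Dataset.from_dict({
    "text": [
        "i loved the spiderman movie!",
        "great acting and a touching story",
        "pineapple on pizza is the worst 🤮",
        "what a waste of two hours",
    ],
    "label": [1, 1, 0, 0],
})

# Assumed base checkpoint; the actual base model of this repository is not documented.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the body
    num_iterations=20,                # number of text pairs generated per example
    num_epochs=1,
)
trainer.train()                       # step 2 (classification head) is fitted afterwards

print(trainer.model(["an instant classic", "utterly forgettable"]))
```

Note that newer SetFit releases replace `SetFitTrainer` with `setfit.Trainer` and `TrainingArguments`, so the exact API may differ by version.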
null
Non_BioNLP
# marklim100/test-model-v2 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("marklim100/test-model-v2") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
task
[ "TEXT_CLASSIFICATION" ]
46,061
PrimeQA/squad-v1-xlm-roberta-large
PrimeQA
null
[ "transformers", "pytorch", "xlm-roberta", "MRC", "SQuAD 1.1", "xlm-roberta-large", "multilingual", "arxiv:1606.05250", "arxiv:1910.07475", "arxiv:1910.11856", "arxiv:1911.02116", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2022-07-07T19:46:24Z
2022-07-07T20:28:50+00:00
45
0
--- language: - multilingual license: apache-2.0 tags: - MRC - SQuAD 1.1 - xlm-roberta-large --- # Model description An XLM-RoBERTa reading comprehension model for [SQuAD 1.1](https://aclanthology.org/D16-1264/). The model is initialized with [xlm-roberta-large](https://huggingface.co/xlm-roberta-large/) and fine-tuned on the [SQuAD 1.1 train data](https://huggingface.co/datasets/squad). ## Intended uses & limitations You can use the raw model for the reading comprehension task. Biases associated with the pre-existing language model, xlm-roberta-large, that we used may be present in our fine-tuned model, squad-v1-xlm-roberta-large. This model is used for zero-shot decoding of [MLQA](https://huggingface.co/datasets/mlqa) and [XQuAD](https://huggingface.co/datasets/xquad) datasets. ## Usage You can use this model directly with the [PrimeQA](https://github.com/primeqa/primeqa) pipeline for reading comprehension [squad.ipynb](https://github.com/primeqa/primeqa/blob/main/notebooks/mrc/squad.ipynb). ```bibtex @article{2016arXiv160605250R, author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev}, Konstantin and {Liang}, Percy}, title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}", journal = {arXiv e-prints}, year = 2016, eid = {arXiv:1606.05250}, pages = {arXiv:1606.05250}, archivePrefix = {arXiv}, eprint = {1606.05250}, } ``` ```bibtex @article{lewis2019mlqa, title={MLQA: Evaluating Cross-lingual Extractive Question Answering}, author={Lewis, Patrick and Oguz, Barlas and Rinott, Ruty and Riedel, Sebastian and Schwenk, Holger}, journal={arXiv preprint arXiv:1910.07475}, year={2019} } ``` ```bibtex @article{Artetxe:etal:2019, author = {Mikel Artetxe and Sebastian Ruder and Dani Yogatama}, title = {On the cross-lingual transferability of monolingual representations}, journal = {CoRR}, volume = {abs/1910.11856}, year = {2019}, archivePrefix = {arXiv}, eprint = {1910.11856} } ``` ```bibtex @article{DBLP:journals/corr/abs-1911-02116, author = {Alexis Conneau and Kartikay Khandelwal and Naman Goyal and Vishrav Chaudhary and Guillaume Wenzek and Francisco Guzm{\'{a}}n and Edouard Grave and Myle Ott and Luke Zettlemoyer and Veselin Stoyanov}, title = {Unsupervised Cross-lingual Representation Learning at Scale}, journal = {CoRR}, volume = {abs/1911.02116}, year = {2019}, url = {http://arxiv.org/abs/1911.02116}, eprinttype = {arXiv}, eprint = {1911.02116}, timestamp = {Mon, 11 Nov 2019 18:38:09 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1911-02116.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
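The usage section above points to the PrimeQA notebook but contains no inline snippet. Because the checkpoint is a standard extractive QA head on top of xlm-roberta-large, it should also work with the plain Transformers question-answering pipeline; the snippet below is a sketch under that assumption, and the question/context pair is made up for illustration.

```python
from transformers import pipeline

# Assumes the checkpoint loads as a standard extractive question-answering model.
qa = pipeline("question-answering", model="PrimeQA/squad-v1-xlm-roberta-large")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context=(
        "The reading comprehension model was initialized with xlm-roberta-large "
        "and fine-tuned on the SQuAD 1.1 training data."
    ),
)
print(result["answer"], round(result["score"], 3))
```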
null
Non_BioNLP
# Model description An XLM-RoBERTa reading comprehension model for [SQuAD 1.1](https://aclanthology.org/D16-1264/). The model is initialized with [xlm-roberta-large](https://huggingface.co/xlm-roberta-large/) and fine-tuned on the [SQuAD 1.1 train data](https://huggingface.co/datasets/squad). ## Intended uses & limitations You can use the raw model for the reading comprehension task. Biases associated with the pre-existing language model, xlm-roberta-large, that we used may be present in our fine-tuned model, squad-v1-xlm-roberta-large. This model is used for zero-shot decoding of [MLQA](https://huggingface.co/datasets/mlqa) and [XQuAD](https://huggingface.co/datasets/xquad) datasets. ## Usage You can use this model directly with the [PrimeQA](https://github.com/primeqa/primeqa) pipeline for reading comprehension [squad.ipynb](https://github.com/primeqa/primeqa/blob/main/notebooks/mrc/squad.ipynb). ```bibtex @article{2016arXiv160605250R, author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev}, Konstantin and {Liang}, Percy}, title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}", journal = {arXiv e-prints}, year = 2016, eid = {arXiv:1606.05250}, pages = {arXiv:1606.05250}, archivePrefix = {arXiv}, eprint = {1606.05250}, } ``` ```bibtex @article{lewis2019mlqa, title={MLQA: Evaluating Cross-lingual Extractive Question Answering}, author={Lewis, Patrick and Oguz, Barlas and Rinott, Ruty and Riedel, Sebastian and Schwenk, Holger}, journal={arXiv preprint arXiv:1910.07475}, year={2019} } ``` ```bibtex @article{Artetxe:etal:2019, author = {Mikel Artetxe and Sebastian Ruder and Dani Yogatama}, title = {On the cross-lingual transferability of monolingual representations}, journal = {CoRR}, volume = {abs/1910.11856}, year = {2019}, archivePrefix = {arXiv}, eprint = {1910.11856} } ``` ```bibtex @article{DBLP:journals/corr/abs-1911-02116, author = {Alexis Conneau and Kartikay Khandelwal and Naman Goyal and Vishrav Chaudhary and Guillaume Wenzek and Francisco Guzm{\'{a}}n and Edouard Grave and Myle Ott and Luke Zettlemoyer and Veselin Stoyanov}, title = {Unsupervised Cross-lingual Representation Learning at Scale}, journal = {CoRR}, volume = {abs/1911.02116}, year = {2019}, url = {http://arxiv.org/abs/1911.02116}, eprinttype = {arXiv}, eprint = {1911.02116}, timestamp = {Mon, 11 Nov 2019 18:38:09 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1911-02116.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"language": ["multilingual"], "license": "apache-2.0", "tags": ["MRC", "SQuAD 1.1", "xlm-roberta-large"]}
task
[ "QUESTION_ANSWERING" ]
46,062
Unbabel/wmt21-comet-da-marian
Unbabel
translation
[ "translation", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "license:apache-2.0", "region:us" ]
2024-05-28T10:56:29Z
2024-05-28T10:59:00+00:00
0
0
--- language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: apache-2.0 pipeline_tag: translation --- Marian version of [wmt21-comet-da](https://huggingface.co/Unbabel/wmt21-comet-da). Credits to the Microsoft Translate Team! # Paper TBA # License Apache-2.0 # Usage TBA # Intended uses Our model is intended to be used for **MT evaluation**. Given a triplet of (source sentence, translation, reference translation), it outputs a single score between 0 and 1, where 1 represents a perfect translation. # Languages Covered: This model builds on top of XLM-R, which covers the following languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Scottish Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western Frisian, Xhosa, Yiddish. Thus, results for language pairs containing uncovered languages are unreliable!
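Since the Usage section above is still marked TBA, the sketch below shows the standard workflow of the `unbabel-comet` package that the parent wmt21-comet-da model is normally scored with. Whether this Marian conversion can be loaded the same way is an assumption, not something confirmed by the card, and the example segments are placeholders.

```python
# pip install unbabel-comet
from comet import download_model, load_from_checkpoint

# Assumption: the checkpoint can be fetched and scored like the original
# Unbabel/wmt21-comet-da model; the exact identifier may differ by comet version.
model_path = download_model("Unbabel/wmt21-comet-da")
model = load_from_checkpoint(model_path)

data = [
    {
        "src": "Dem Feuer konnte Einhalt geboten werden",
        "mt": "The fire could be stopped",
        "ref": "They were able to control the fire.",
    }
]
output = model.predict(data, batch_size=8, gpus=0)
print(output.scores)        # one quality score per segment
print(output.system_score)  # corpus-level average
```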
null
Non_BioNLP
Marian version of [wmt21-comet-da](https://huggingface.co/Unbabel/wmt21-comet-da). Credits to the Microsoft Translate Team! # Paper TBA # License Apache-2.0 # Usage TBA # Intended uses Our model is intended to be used for **MT evaluation**. Given a triplet of (source sentence, translation, reference translation), it outputs a single score between 0 and 1, where 1 represents a perfect translation. # Languages Covered: This model builds on top of XLM-R, which covers the following languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Scottish Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western Frisian, Xhosa, Yiddish. Thus, results for language pairs containing uncovered languages are unreliable!
{"language": ["multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh"], "license": "apache-2.0", "pipeline_tag": "translation"}
task
[ "TRANSLATION" ]
46,063
allegro/p5-eng2many
allegro
translation
[ "transformers", "safetensors", "marian", "text2text-generation", "translation", "mt", "pytorch", "sentence-piece", "many2one", "multilingual", "pivot", "allegro", "laniqo", "cs", "en", "pl", "sk", "sl", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-01-27T16:15:06Z
2025-02-10T23:31:17+00:00
35
0
--- language: - cs - en - pl - sk - sl library_name: transformers license: cc-by-4.0 tags: - translation - mt - marian - pytorch - sentence-piece - many2one - multilingual - pivot - allegro - laniqo --- # MultiSlav P5-eng2many <p align="center"> <a href="https://ml.allegro.tech/"><img src="allegro-title.svg" alt="MLR @ Allegro.com"></a> </p> ## Multilingual English-to-Many MT Model ___P5-eng2many___ is an Encoder-Decoder vanilla transformer model trained on a sentence-level Machine Translation task. The model supports translation from English to 4 languages: Czech, Polish, Slovak, and Slovene. This model is part of the [___MultiSlav___ collection](https://huggingface.co/collections/allegro/multislav-6793d6b6419e5963e759a683). More information will be available soon in our upcoming MultiSlav paper. Experiments were conducted under a research project by the [Machine Learning Research](https://ml.allegro.tech/) lab for [Allegro.com](https://ml.allegro.tech/). Big thanks to [laniqo.com](laniqo.com) for cooperation in the research. <p align="center"> <img src="p5-eng.svg"> </p> ___P5-eng2many___ - _5_-language _English-to-Many_ model translating from English to all applicable languages. This model and [_P5-many2eng_](https://huggingface.co/allegro/P5-many2eng) combine into the ___P5-eng___ pivot system translating between _5_ languages. _P5-eng_ first translates from any supported language to an English bridge sentence using the Many2One model, and then translates from the English bridge sentence to the target language using the One2Many model. ### Model description * **Model name:** P5-eng2many * **Source Language:** English * **Target Languages:** Czech, Polish, Slovak, Slovene * **Model Collection:** [MultiSlav](https://huggingface.co/collections/allegro/multislav-6793d6b6419e5963e759a683) * **Model type:** MarianMTModel Encoder-Decoder * **License:** CC BY 4.0 (commercial use allowed) * **Developed by:** [MLR @ Allegro](https://ml.allegro.tech/) & [Laniqo.com](https://laniqo.com/) ### Supported languages To use this model, you must specify the target language for translation. Target language tokens are represented as 3-letter ISO 639-3 language codes embedded in the format >>xxx<<. All accepted directions and their respective tokens are listed below. Each of them was added as a special token to the Sentence-Piece tokenizer. | **Target Language** | **First token** | |---------------------|-----------------| | Czech | `>>ces<<` | | Polish | `>>pol<<` | | Slovak | `>>slk<<` | | Slovene | `>>slv<<` | ## Use case quickstart Example code snippet to use the model. Due to a bug, the `MarianMTModel` class must be used explicitly. ```python from transformers import AutoTokenizer, MarianMTModel o2m_model_name = "Allegro/P5-eng2many" o2m_tokenizer = AutoTokenizer.from_pretrained(o2m_model_name) o2m_model = MarianMTModel.from_pretrained(o2m_model_name) text = "Allegro is an online e-commerce platform on which medium and small companies as well as large brands sell their products." target_languages = ["ces", "pol", "slk", "slv"] batch_to_translate = [ f">>{lang}<<" + " " + text for lang in target_languages ] translations = o2m_model.generate(**o2m_tokenizer.batch_encode_plus(batch_to_translate, return_tensors="pt")) bridge_translations = o2m_tokenizer.batch_decode(translations, skip_special_tokens=True, clean_up_tokenization_spaces=True) for trans in bridge_translations: print(trans) ``` Generated Czech output: > Allegro je on-line e-commerce platforma, na které střední a malé firmy, stejně jako velké značky prodávají své produkty.
Generated Polish output: > Allegro to internetowa platforma e-commerce, na której średnie i małe firmy oraz duże marki sprzedają swoje produkty. Generated Slovak output: > Allegro je online e-commerce platforma, na ktorej stredné a malé spoločnosti, ako aj veľké značky predávajú svoje produkty. Generated Slovene output: > Allegro je spletna e-poslovanje platforma, na kateri srednje in mala podjetja, kot tudi velike blagovne znamke prodajajo svoje izdelke. To pivot-translate to other languages via _bridge_ English sentence, we need One2Many model. Many2One model requires explicit source language token as well. Example for translating from Polish to Slovak: ```python from transformers import AutoTokenizer, MarianMTModel m2o_model_name = "Allegro/P5-many2eng" o2m_model_name = "Allegro/P5-eng2many" m2o_tokenizer = AutoTokenizer.from_pretrained(m2o_model_name) m2o_model = MarianMTModel.from_pretrained(m2o_model_name) o2m_tokenizer = AutoTokenizer.from_pretrained(o2m_model_name) o2m_model = MarianMTModel.from_pretrained(o2m_model_name) text = ">>pol<<" + " " + "Allegro to internetowa platforma e-commerce, na której swoje produkty sprzedają średnie i małe firmy, jak również duże marki." translation = m2o_model.generate(**m2o_tokenizer.batch_encode_plus([text], return_tensors="pt")) bridge_translations = m2o_tokenizer.batch_decode(translation, skip_special_tokens=True, clean_up_tokenization_spaces=True) post_edited_bridge = ">>slk<<" + " " + bridge_translations[0] translation = o2m_model.generate(**o2m_tokenizer.batch_encode_plus([post_edited_bridge], return_tensors="pt")) decoded_translations = o2m_tokenizer.batch_decode(translation, skip_special_tokens=True, clean_up_tokenization_spaces=True) print(decoded_translations[0]) ``` Generated Polish to Slovak pivot translation via English: > Allegro je online e-commerce platforma, kde stredné a malé firmy, ako aj veľké značky predávajú svoje produkty. ## Training [SentencePiece](https://github.com/google/sentencepiece) tokenizer has a vocab size 80k in total (16k per language). Tokenizer was trained on randomly sampled part of the training corpus. During the training we used the [MarianNMT](https://marian-nmt.github.io/) framework. Base marian configuration used: [transfromer-big](https://github.com/marian-nmt/marian-dev/blob/master/src/common/aliases.cpp#L113). All training parameters are listed in table below. 
### Training hyperparameters: | **Hyperparameter** | **Value** | |----------------------------|------------------------------------------------------------------------------------------------------------| | Total Parameter Size | 258M | | Training Examples | 393M | | Vocab Size | 80k | | Base Parameters | [Marian transfromer-big](https://github.com/marian-nmt/marian-dev/blob/master/src/common/aliases.cpp#L113) | | Number of Encoding Layers | 6 | | Number of Decoding Layers | 6 | | Model Dimension | 1024 | | FF Dimension | 4096 | | Heads | 16 | | Dropout | 0.1 | | Batch Size | mini batch fit to VRAM | | Training Accelerators | 4x A100 40GB | | Max Length | 100 tokens | | Optimizer | Adam | | Warmup steps | 8000 | | Context | Sentence-level MT | | Source Language Supported | English | | Target Languages Supported | Czech, Polish, Slovak, Slovene | | Precision | float16 | | Validation Freq | 3000 steps | | Stop Metric | ChrF | | Stop Criterion | 20 Validation steps | ## Training corpora <p align="center"> <img src="pivot-data-eng2many.svg"> </p> The main research question was: "How does adding additional, related languages impact the quality of the model?" - we explored it in the Slavic language family. In this model we experimented with expanding data-regime by using data from multiple target language and expanding language-pool by adding English. We found that additional data clearly improved performance compared to the bi-directional baseline models. For example in translation from Polish to Czech, this allowed us to expand training data-size from 63M to 269M examples, and from 25M to 269M for Slovene to Czech translation. We only used explicitly open-source data to ensure open-source license of our model. Datasets were downloaded via [MT-Data](https://pypi.org/project/mtdata/0.2.10/) library. Number of total examples post filtering and deduplication: __269M__. The datasets used: | **Corpus** | |----------------------| | paracrawl | | opensubtitles | | multiparacrawl | | dgt | | elrc | | xlent | | wikititles | | wmt | | wikimatrix | | dcep | | ELRC | | tildemodel | | europarl | | eesc | | eubookshop | | emea | | jrc_acquis | | ema | | qed | | elitr_eca | | EU-dcep | | rapid | | ecb | | kde4 | | news_commentary | | kde | | bible_uedin | | europat | | elra | | wikipedia | | wikimedia | | tatoeba | | globalvoices | | euconst | | ubuntu | | php | | ecdc | | eac | | eac_reference | | gnome | | EU-eac | | books | | EU-ecdc | | newsdev | | khresmoi_summary | | czechtourism | | khresmoi_summary_dev | | worldbank | ## Evaluation Evaluation of the models was performed on [Flores200](https://huggingface.co/datasets/facebook/flores) dataset. The table below compares performance of the open-source models and all applicable models from our collection. Metrics BLEU, ChrF2, and Unbabel/wmt22-comet-da. 
Translation results on translation from Polish to Czech (Slavic direction with the __highest__ data-regime): | **Model** | **Comet22** | **BLEU** | **ChrF** | **Model Size** | |------------------------------------------------------------------------------|:-----------:|:--------:|:--------:|---------------:| | M2M−100 | 89.6 | 19.8 | 47.7 | 1.2B | | NLLB−200 | 89.4 | 19.2 | 46.7 | 1.3B | | Opus Sla-Sla | 82.9 | 14.6 | 42.6 | 64M | | BiDi-ces-pol (baseline) | 90.0 | 20.3 | 48.5 | 209M | | P4-pol <span style="color:red;">◊</span> | 90.2 | 20.2 | 48.5 | 2x 242M | | P5-eng <span style="color:red;">◊</span> <span style="color:green;">*</span> | 89.0 | 19.9 | 48.3 | 2x 258M | | P5-many2ces | 90.3 | 20.2 | 48.6 | 258M | | MultiSlav-4slav | 90.2 | 20.6 | 48.7 | 242M | | MultiSlav-5lang | __90.4__ | __20.7__ | __48.9__ | 258M | Translation results on translation from Slovak to Slovene (Slavic direction with the __lowest__ data-regime): | **Model** | **Comet22** | **BLEU** | **ChrF** | **Model Size** | |------------------------------------------------------------------------------|:-----------:|:--------:|:--------:|---------------:| | M2M−100 | 89.6 | 26.6 | 55.0 | 1.2B | | NLLB−200 | 88.8 | 23.3 | 42.0 | 1.3B | | BiDi-slk-slv (baseline) | 89.4 | 26.6 | 55.4 | 209M | | P4-pol <span style="color:red;">◊</span> | 88.4 | 24.8 | 53.2 | 2x 242M | | P5-eng <span style="color:red;">◊</span> <span style="color:green;">*</span> | 88.5 | 25.6 | 54.6 | 2x 258M | | P5-ces <span style="color:red;">◊</span> | 89.8 | 26.6 | 55.3 | 2x 258M | | MultiSlav-4slav | 90.1 | __27.1__ | __55.7__ | 242M | | MultiSlav-5lang | __90.2__ | __27.1__ | __55.7__ | 258M | <span style="color:green;">*</span> this model is One2Many part of P5-eng pivot system. <span style="color:red;">◊</span> system of 2 models *Many2XXX* and *XXX2Many*. ## Limitations and Biases We did not evaluate inherent bias contained in training datasets. It is advised to validate bias of our models in perspective domain. This might be especially problematic in translation from English to Slavic languages, which require explicitly indicated gender and might hallucinate based on bias present in training data. ## License The model is licensed under CC BY 4.0, which allows for commercial use. ## Citation TO BE UPDATED SOON 🤗 ## Contact Options Authors: - MLR @ Allegro: [Artur Kot](https://linkedin.com/in/arturkot), [Mikołaj Koszowski](https://linkedin.com/in/mkoszowski), [Wojciech Chojnowski](https://linkedin.com/in/wojciech-chojnowski-744702348), [Mieszko Rutkowski](https://linkedin.com/in/mieszko-rutkowski) - Laniqo.com: [Artur Nowakowski](https://linkedin.com/in/artur-nowakowski-mt), [Kamil Guttmann](https://linkedin.com/in/kamil-guttmann), [Mikołaj Pokrywka](https://linkedin.com/in/mikolaj-pokrywka) Please don't hesitate to contact authors if you have any questions or suggestions: - e-mail: [email protected] or [email protected] - LinkedIn: [Artur Kot](https://linkedin.com/in/arturkot) or [Mikołaj Koszowski](https://linkedin.com/in/mkoszowski)
null
Non_BioNLP
# MultiSlav P5-eng2many <p align="center"> <a href="https://ml.allegro.tech/"><img src="allegro-title.svg" alt="MLR @ Allegro.com"></a> </p> ## Multilingual English-to-Many MT Model ___P5-eng2many___ is an Encoder-Decoder vanilla transformer model trained on sentence-level Machine Translation task. Model is supporting translation from English language to 4 languages: Czech, Polish, Slovak and Slovene. This model is part of the [___MultiSlav___ collection](https://huggingface.co/collections/allegro/multislav-6793d6b6419e5963e759a683). More information will be available soon in our upcoming MultiSlav paper. Experiments were conducted under research project by [Machine Learning Research](https://ml.allegro.tech/) lab for [Allegro.com](https://ml.allegro.tech/). Big thanks to [laniqo.com](laniqo.com) for cooperation in the research. <p align="center"> <img src="p5-eng.svg"> </p> ___P5-eng2many___ - _5_-language _English-to-Many_ model translating from English to all applicable languages This model and [_P5-many2eng_](https://huggingface.co/allegro/P5-many2eng) combine into ___P5-eng___ pivot system translating between _5_ languages. _P5-eng_ translates all supported languages using Many2One model to English bridge sentence and next using the One2Many model from English bridge sentence to target language. ### Model description * **Model name:** P5-many2eng * **Source Language:** English * **Target Languages:** Czech, Polish, Slovak, Slovene * **Model Collection:** [MultiSlav](https://huggingface.co/collections/allegro/multislav-6793d6b6419e5963e759a683) * **Model type:** MarianMTModel Encoder-Decoder * **License:** CC BY 4.0 (commercial use allowed) * **Developed by:** [MLR @ Allegro](https://ml.allegro.tech/) & [Laniqo.com](https://laniqo.com/) ### Supported languages Using model you must specify target language for translation. Target language tokens are represented as 3-letter ISO 639-3 language codes embedded in a format >>xxx<<. All accepted directions and their respective tokens are listed below. Each of them was added as a special token to Sentence-Piece tokenizer. | **Target Language** | **First token** | |---------------------|-----------------| | Czech | `>>ces<<` | | Polish | `>>pol<<` | | Slovak | `>>slk<<` | | Slovene | `>>slv<<` | ## Use case quickstart Example code-snippet to use model. Due to bug the `MarianMTModel` must be used explicitly. ```python from transformers import AutoTokenizer, MarianMTModel o2m_model_name = "Allegro/P5-eng2many" o2m_tokenizer = AutoTokenizer.from_pretrained(o2m_model_name) o2m_model = MarianMTModel.from_pretrained(o2m_model_name) text = "Allegro is an online e-commerce platform on which medium and small companies as well as large brands sell their products." target_languages = ["ces", "pol", "slk", "slv"] batch_to_translate = [ f">>{lang}<<" + " " + text for lang in target_languages ] translations = o2m_model.generate(**o2m_tokenizer.batch_encode_plus(batch_to_translate, return_tensors="pt")) bridge_translations = o2m_tokenizer.batch_decode(translations, skip_special_tokens=True, clean_up_tokenization_spaces=True) for trans in bridge_translations: print(trans) ``` Generated Czech output: > Allegro je on-line e-commerce platforma, na které střední a malé firmy, stejně jako velké značky prodávají své produkty. Generated Polish output: > Allegro to internetowa platforma e-commerce, na której średnie i małe firmy oraz duże marki sprzedają swoje produkty. 
Generated Slovak output: > Allegro je online e-commerce platforma, na ktorej stredné a malé spoločnosti, ako aj veľké značky predávajú svoje produkty. Generated Slovene output: > Allegro je spletna e-poslovanje platforma, na kateri srednje in mala podjetja, kot tudi velike blagovne znamke prodajajo svoje izdelke. To pivot-translate to other languages via _bridge_ English sentence, we need One2Many model. Many2One model requires explicit source language token as well. Example for translating from Polish to Slovak: ```python from transformers import AutoTokenizer, MarianMTModel m2o_model_name = "Allegro/P5-many2eng" o2m_model_name = "Allegro/P5-eng2many" m2o_tokenizer = AutoTokenizer.from_pretrained(m2o_model_name) m2o_model = MarianMTModel.from_pretrained(m2o_model_name) o2m_tokenizer = AutoTokenizer.from_pretrained(o2m_model_name) o2m_model = MarianMTModel.from_pretrained(o2m_model_name) text = ">>pol<<" + " " + "Allegro to internetowa platforma e-commerce, na której swoje produkty sprzedają średnie i małe firmy, jak również duże marki." translation = m2o_model.generate(**m2o_tokenizer.batch_encode_plus([text], return_tensors="pt")) bridge_translations = m2o_tokenizer.batch_decode(translation, skip_special_tokens=True, clean_up_tokenization_spaces=True) post_edited_bridge = ">>slk<<" + " " + bridge_translations[0] translation = o2m_model.generate(**o2m_tokenizer.batch_encode_plus([post_edited_bridge], return_tensors="pt")) decoded_translations = o2m_tokenizer.batch_decode(translation, skip_special_tokens=True, clean_up_tokenization_spaces=True) print(decoded_translations[0]) ``` Generated Polish to Slovak pivot translation via English: > Allegro je online e-commerce platforma, kde stredné a malé firmy, ako aj veľké značky predávajú svoje produkty. ## Training [SentencePiece](https://github.com/google/sentencepiece) tokenizer has a vocab size 80k in total (16k per language). Tokenizer was trained on randomly sampled part of the training corpus. During the training we used the [MarianNMT](https://marian-nmt.github.io/) framework. Base marian configuration used: [transfromer-big](https://github.com/marian-nmt/marian-dev/blob/master/src/common/aliases.cpp#L113). All training parameters are listed in table below. ### Training hyperparameters: | **Hyperparameter** | **Value** | |----------------------------|------------------------------------------------------------------------------------------------------------| | Total Parameter Size | 258M | | Training Examples | 393M | | Vocab Size | 80k | | Base Parameters | [Marian transfromer-big](https://github.com/marian-nmt/marian-dev/blob/master/src/common/aliases.cpp#L113) | | Number of Encoding Layers | 6 | | Number of Decoding Layers | 6 | | Model Dimension | 1024 | | FF Dimension | 4096 | | Heads | 16 | | Dropout | 0.1 | | Batch Size | mini batch fit to VRAM | | Training Accelerators | 4x A100 40GB | | Max Length | 100 tokens | | Optimizer | Adam | | Warmup steps | 8000 | | Context | Sentence-level MT | | Source Language Supported | English | | Target Languages Supported | Czech, Polish, Slovak, Slovene | | Precision | float16 | | Validation Freq | 3000 steps | | Stop Metric | ChrF | | Stop Criterion | 20 Validation steps | ## Training corpora <p align="center"> <img src="pivot-data-eng2many.svg"> </p> The main research question was: "How does adding additional, related languages impact the quality of the model?" - we explored it in the Slavic language family. 
In this model we experimented with expanding data-regime by using data from multiple target language and expanding language-pool by adding English. We found that additional data clearly improved performance compared to the bi-directional baseline models. For example in translation from Polish to Czech, this allowed us to expand training data-size from 63M to 269M examples, and from 25M to 269M for Slovene to Czech translation. We only used explicitly open-source data to ensure open-source license of our model. Datasets were downloaded via [MT-Data](https://pypi.org/project/mtdata/0.2.10/) library. Number of total examples post filtering and deduplication: __269M__. The datasets used: | **Corpus** | |----------------------| | paracrawl | | opensubtitles | | multiparacrawl | | dgt | | elrc | | xlent | | wikititles | | wmt | | wikimatrix | | dcep | | ELRC | | tildemodel | | europarl | | eesc | | eubookshop | | emea | | jrc_acquis | | ema | | qed | | elitr_eca | | EU-dcep | | rapid | | ecb | | kde4 | | news_commentary | | kde | | bible_uedin | | europat | | elra | | wikipedia | | wikimedia | | tatoeba | | globalvoices | | euconst | | ubuntu | | php | | ecdc | | eac | | eac_reference | | gnome | | EU-eac | | books | | EU-ecdc | | newsdev | | khresmoi_summary | | czechtourism | | khresmoi_summary_dev | | worldbank | ## Evaluation Evaluation of the models was performed on [Flores200](https://huggingface.co/datasets/facebook/flores) dataset. The table below compares performance of the open-source models and all applicable models from our collection. Metrics BLEU, ChrF2, and Unbabel/wmt22-comet-da. Translation results on translation from Polish to Czech (Slavic direction with the __highest__ data-regime): | **Model** | **Comet22** | **BLEU** | **ChrF** | **Model Size** | |------------------------------------------------------------------------------|:-----------:|:--------:|:--------:|---------------:| | M2M−100 | 89.6 | 19.8 | 47.7 | 1.2B | | NLLB−200 | 89.4 | 19.2 | 46.7 | 1.3B | | Opus Sla-Sla | 82.9 | 14.6 | 42.6 | 64M | | BiDi-ces-pol (baseline) | 90.0 | 20.3 | 48.5 | 209M | | P4-pol <span style="color:red;">◊</span> | 90.2 | 20.2 | 48.5 | 2x 242M | | P5-eng <span style="color:red;">◊</span> <span style="color:green;">*</span> | 89.0 | 19.9 | 48.3 | 2x 258M | | P5-many2ces | 90.3 | 20.2 | 48.6 | 258M | | MultiSlav-4slav | 90.2 | 20.6 | 48.7 | 242M | | MultiSlav-5lang | __90.4__ | __20.7__ | __48.9__ | 258M | Translation results on translation from Slovak to Slovene (Slavic direction with the __lowest__ data-regime): | **Model** | **Comet22** | **BLEU** | **ChrF** | **Model Size** | |------------------------------------------------------------------------------|:-----------:|:--------:|:--------:|---------------:| | M2M−100 | 89.6 | 26.6 | 55.0 | 1.2B | | NLLB−200 | 88.8 | 23.3 | 42.0 | 1.3B | | BiDi-slk-slv (baseline) | 89.4 | 26.6 | 55.4 | 209M | | P4-pol <span style="color:red;">◊</span> | 88.4 | 24.8 | 53.2 | 2x 242M | | P5-eng <span style="color:red;">◊</span> <span style="color:green;">*</span> | 88.5 | 25.6 | 54.6 | 2x 258M | | P5-ces <span style="color:red;">◊</span> | 89.8 | 26.6 | 55.3 | 2x 258M | | MultiSlav-4slav | 90.1 | __27.1__ | __55.7__ | 242M | | MultiSlav-5lang | __90.2__ | __27.1__ | __55.7__ | 258M | <span style="color:green;">*</span> this model is One2Many part of P5-eng pivot system. <span style="color:red;">◊</span> system of 2 models *Many2XXX* and *XXX2Many*. ## Limitations and Biases We did not evaluate inherent bias contained in training datasets. 
It is advised to validate bias of our models in perspective domain. This might be especially problematic in translation from English to Slavic languages, which require explicitly indicated gender and might hallucinate based on bias present in training data. ## License The model is licensed under CC BY 4.0, which allows for commercial use. ## Citation TO BE UPDATED SOON 🤗 ## Contact Options Authors: - MLR @ Allegro: [Artur Kot](https://linkedin.com/in/arturkot), [Mikołaj Koszowski](https://linkedin.com/in/mkoszowski), [Wojciech Chojnowski](https://linkedin.com/in/wojciech-chojnowski-744702348), [Mieszko Rutkowski](https://linkedin.com/in/mieszko-rutkowski) - Laniqo.com: [Artur Nowakowski](https://linkedin.com/in/artur-nowakowski-mt), [Kamil Guttmann](https://linkedin.com/in/kamil-guttmann), [Mikołaj Pokrywka](https://linkedin.com/in/mikolaj-pokrywka) Please don't hesitate to contact authors if you have any questions or suggestions: - e-mail: [email protected] or [email protected] - LinkedIn: [Artur Kot](https://linkedin.com/in/arturkot) or [Mikołaj Koszowski](https://linkedin.com/in/mkoszowski)
{"language": ["cs", "en", "pl", "sk", "sl"], "library_name": "transformers", "license": "cc-by-4.0", "tags": ["translation", "mt", "marian", "pytorch", "sentence-piece", "many2one", "multilingual", "pivot", "allegro", "laniqo"]}
task
[ "TRANSLATION" ]
46,064
DigitalUmuganda/Nllb_finetuned_tourism_en_kin
DigitalUmuganda
text2text-generation
[ "transformers", "pytorch", "m2m_100", "text2text-generation", "en", "rw", "dataset:mbazaNLP/NMT_Tourism_parallel_data_en_kin", "dataset:mbazaNLP/NMT_Education_parallel_data_en_kin", "dataset:mbazaNLP/Kinyarwanda_English_parallel_dataset", "license:cc-by-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-08-29T04:58:58Z
2023-08-30T23:52:42+00:00
13
1
--- datasets: - mbazaNLP/NMT_Tourism_parallel_data_en_kin - mbazaNLP/NMT_Education_parallel_data_en_kin - mbazaNLP/Kinyarwanda_English_parallel_dataset language: - en - rw library_name: transformers license: cc-by-2.0 --- ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is a Machine Translation model, finetuned from [NLLB](https://huggingface.co/facebook/nllb-200-distilled-1.3B)-200's distilled 1.3B model. It is meant to be used in machine translation for tourism-related data. - **Finetuning code repository:** the code used to finetune this model can be found [here](https://github.com/Digital-Umuganda/twb_nllb_finetuning) <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## How to Get Started with the Model Use the code below to get started with the model. ### Training Procedure The model was finetuned on three datasets: a [general](https://huggingface.co/datasets/mbazaNLP/Kinyarwanda_English_parallel_dataset) purpose dataset, a [tourism](https://huggingface.co/datasets/mbazaNLP/NMT_Tourism_parallel_data_en_kin) dataset, and an [education](https://huggingface.co/datasets/mbazaNLP/NMT_Education_parallel_data_en_kin) dataset. The model was finetuned in two phases. #### Phase one: - General purpose dataset - Education dataset - Tourism dataset #### Phase two: - Tourism dataset Other than the dataset changes between phase one and phase two finetuning, no other hyperparameters were modified. In both cases, the model was trained on an A100 40GB GPU for two epochs. ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> #### Testing Data <!-- This should link to a Data Card if possible. --> #### Metrics Model performance was measured using BLEU, spBLEU, and chrF++ metrics. ### Results <!-- [More Information Needed] -->
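The "How to Get Started" section above says "Use the code below" but no snippet follows. A minimal sketch is given here, assuming the checkpoint follows the usual NLLB-200 generation pattern with FLORES-200 language codes (`eng_Latn` as source, `kin_Latn` as target); the example sentence is illustrative.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "DigitalUmuganda/Nllb_finetuned_tourism_en_kin"
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "Where can I buy tickets for the gorilla trekking tour?"
inputs = tokenizer(text, return_tensors="pt")

# Force Kinyarwanda as the target language, as with other NLLB-200 checkpoints.
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("kin_Latn"),
    max_length=128,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```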
null
Non_BioNLP
## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is a Machine Translation model, finetuned from [NLLB](https://huggingface.co/facebook/nllb-200-distilled-1.3B)-200's distilled 1.3B model. It is meant to be used in machine translation for tourism-related data. - **Finetuning code repository:** the code used to finetune this model can be found [here](https://github.com/Digital-Umuganda/twb_nllb_finetuning) <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## How to Get Started with the Model Use the code below to get started with the model. ### Training Procedure The model was finetuned on three datasets: a [general](https://huggingface.co/datasets/mbazaNLP/Kinyarwanda_English_parallel_dataset) purpose dataset, a [tourism](https://huggingface.co/datasets/mbazaNLP/NMT_Tourism_parallel_data_en_kin) dataset, and an [education](https://huggingface.co/datasets/mbazaNLP/NMT_Education_parallel_data_en_kin) dataset. The model was finetuned in two phases. #### Phase one: - General purpose dataset - Education dataset - Tourism dataset #### Phase two: - Tourism dataset Other than the dataset changes between phase one and phase two finetuning, no other hyperparameters were modified. In both cases, the model was trained on an A100 40GB GPU for two epochs. ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> #### Testing Data <!-- This should link to a Data Card if possible. --> #### Metrics Model performance was measured using BLEU, spBLEU, and chrF++ metrics. ### Results <!-- [More Information Needed] -->
{"datasets": ["mbazaNLP/NMT_Tourism_parallel_data_en_kin", "mbazaNLP/NMT_Education_parallel_data_en_kin", "mbazaNLP/Kinyarwanda_English_parallel_dataset"], "language": ["en", "rw"], "library_name": "transformers", "license": "cc-by-2.0"}
task
[ "TRANSLATION" ]
46,065
gaudi/opus-mt-en-ber-ctranslate2
gaudi
translation
[ "transformers", "marian", "ctranslate2", "translation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-07-18T14:57:16Z
2024-10-19T00:05:23+00:00
6
0
--- license: apache-2.0 tags: - ctranslate2 - translation --- # Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-ber) - This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-ber).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-ber --output_dir ./ctranslate2/opus-mt-en-ber-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-en-ber-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-en-ber-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. 
) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-en-ber-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be identical to the [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-ber) by Helsinki-NLP.
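For CPU-only hosts, the same checkpoint can be loaded with the settings listed under **Compute Type** above. The following is a minimal sketch adapted from the CUDA example; the model path, thread counts, and the sample sentence are illustrative assumptions rather than values taken from this card.

```python
from ctranslate2 import Translator
import transformers

model_dir = "./opus-mt-en-ber-ctranslate2"  # Path to the cloned repository (adjust as needed).

translator = Translator(
    model_path=model_dir,
    device="cpu",          # CPU-only inference.
    inter_threads=1,       # Maximum number of parallel translations.
    intra_threads=4,       # Number of OpenMP threads; tune to the host machine.
    compute_type="int8",   # Recommended compute type for CPU (see Compute Type above).
)

tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("Hello, how are you?"))  # placeholder input
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```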
null
Non_BioNLP
# Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-ber) - This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-ber).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. 
A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield different balances between inference performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-ber --output_dir ./ctranslate2/opus-mt-en-ber-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-en-ber-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-en-ber-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-en-ber-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be identical to the [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-ber) by Helsinki-NLP.
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
task
[ "TRANSLATION" ]
46,066
joaobarroca/distilbert-base-uncased-finetuned-massive-intent-detection-english
joaobarroca
text-classification
[ "transformers", "pytorch", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:massive", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-05-24T14:17:52Z
2023-10-18T22:14:39+00:00
149
5
--- base_model: distilbert-base-uncased datasets: - massive license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-massive-intent-detection-english results: - task: type: text-classification name: Text Classification dataset: name: massive type: massive args: en-US metrics: - type: accuracy value: 0.886684599865501 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-massive-intent-detection-english This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the massive dataset. It achieves the following results on the evaluation set: - Loss: 0.4873 - Accuracy: 0.8867 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.5849 | 1.0 | 360 | 1.3826 | 0.7359 | | 1.0662 | 2.0 | 720 | 0.7454 | 0.8357 | | 0.5947 | 3.0 | 1080 | 0.5668 | 0.8642 | | 0.3824 | 4.0 | 1440 | 0.5007 | 0.8770 | | 0.2649 | 5.0 | 1800 | 0.4829 | 0.8824 | | 0.1877 | 6.0 | 2160 | 0.4843 | 0.8824 | | 0.1377 | 7.0 | 2520 | 0.4858 | 0.8834 | | 0.1067 | 8.0 | 2880 | 0.4924 | 0.8864 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
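The card lists only training details; a minimal inference sketch using the Transformers `pipeline` API is shown below. The example utterance is an illustrative assumption and is not taken from the MASSIVE dataset.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for intent classification.
classifier = pipeline(
    "text-classification",
    model="joaobarroca/distilbert-base-uncased-finetuned-massive-intent-detection-english",
)

# Returns the predicted intent label and its score.
print(classifier("wake me up at seven in the morning"))
```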
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-massive-intent-detection-english This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the massive dataset. It achieves the following results on the evaluation set: - Loss: 0.4873 - Accuracy: 0.8867 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.5849 | 1.0 | 360 | 1.3826 | 0.7359 | | 1.0662 | 2.0 | 720 | 0.7454 | 0.8357 | | 0.5947 | 3.0 | 1080 | 0.5668 | 0.8642 | | 0.3824 | 4.0 | 1440 | 0.5007 | 0.8770 | | 0.2649 | 5.0 | 1800 | 0.4829 | 0.8824 | | 0.1877 | 6.0 | 2160 | 0.4843 | 0.8824 | | 0.1377 | 7.0 | 2520 | 0.4858 | 0.8834 | | 0.1067 | 8.0 | 2880 | 0.4924 | 0.8864 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
{"base_model": "distilbert-base-uncased", "datasets": ["massive"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-massive-intent-detection-english", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "massive", "type": "massive", "args": "en-US"}, "metrics": [{"type": "accuracy", "value": 0.886684599865501, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,067
KarelDO/gpt2.CEBaB_confounding.observational.sa.5-class.seed_44
KarelDO
null
[ "transformers", "pytorch", "gpt2", "generated_from_trainer", "en", "dataset:OpenTable", "license:mit", "model-index", "text-generation-inference", "endpoints_compatible", "region:us" ]
2022-10-14T03:39:54Z
2022-10-14T03:42:29+00:00
14
0
--- datasets: - OpenTable language: - en license: mit metrics: - accuracy tags: - generated_from_trainer model-index: - name: gpt2.CEBaB_confounding.observational.sa.5-class.seed_44 results: - task: type: text-classification name: Text Classification dataset: name: OpenTable OPENTABLE type: OpenTable args: opentable metrics: - type: accuracy value: 0.5636580992229527 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2.CEBaB_confounding.observational.sa.5-class.seed_44 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the OpenTable OPENTABLE dataset. It achieves the following results on the evaluation set: - Loss: 0.9660 - Accuracy: 0.5637 - Macro-f1: 0.4288 - Weighted-macro-f1: 0.4995 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 44 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.5.2 - Tokenizers 0.12.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2.CEBaB_confounding.observational.sa.5-class.seed_44 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the OpenTable OPENTABLE dataset. It achieves the following results on the evaluation set: - Loss: 0.9660 - Accuracy: 0.5637 - Macro-f1: 0.4288 - Weighted-macro-f1: 0.4995 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 44 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.5.2 - Tokenizers 0.12.1
{"datasets": ["OpenTable"], "language": ["en"], "license": "mit", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "gpt2.CEBaB_confounding.observational.sa.5-class.seed_44", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "OpenTable OPENTABLE", "type": "OpenTable", "args": "opentable"}, "metrics": [{"type": "accuracy", "value": 0.5636580992229527, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,068
flax-community/clip-vision-bert-vqa-ft-6k
flax-community
text-classification
[ "transformers", "jax", "clip-vision-bert", "text-classification", "arxiv:1908.03557", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2021-07-21T09:21:58+00:00
18
4
--- {} --- # CLIP-Vision-BERT Multilingual VQA Model Fine-tuned CLIP-Vision-BERT on translated [VQAv2](https://visualqa.org/challenge.html) image-text pairs using sequence classification objective. We translate the dataset to three other languages other than English: French, German, and Spanish using the [MarianMT Models](https://huggingface.co/transformers/model_doc/marian.html). This model is based on the VisualBERT which was introduced in [this paper](https://arxiv.org/abs/1908.03557) and first released in [this repository](https://github.com/uclanlp/visualbert). The output is 3129 class logits, the same classes as used by VisualBERT authors. The initial weights are loaded from the Conceptual-12M 60k [checkpoints](https://huggingface.co/flax-community/clip-vision-bert-cc12m-60k). We trained the CLIP-Vision-BERT VQA model during community week hosted by Huggingface 🤗 using JAX/Flax. ## Model description CLIP-Vision-BERT is a modified BERT model which takes in visual embeddings from the CLIP-Vision transformer and concatenates them with BERT textual embeddings before passing them to the self-attention layers of BERT. This is done for deep cross-modal interaction between the two modes. ## Intended uses & limitations❗️ This model is fine-tuned on a multi-translated version of the visual question answering task - [VQA v2](https://visualqa.org/challenge.html). Since VQAv2 is a dataset scraped from the internet, it will involve some biases which will also affect all fine-tuned versions of this model. ### How to use❓ You can use this model directly on visual question answering. You will need to clone the model from [here](https://github.com/gchhablani/multilingual-vqa). An example of usage is shown below: ```python >>> from torchvision.io import read_image >>> import numpy as np >>> import os >>> from transformers import CLIPProcessor, BertTokenizerFast >>> from model.flax_clip_vision_bert.modeling_clip_vision_bert import FlaxCLIPVisionBertForSequenceClassification >>> image_path = os.path.join('images/val2014', os.listdir('images/val2014')[0]) >>> img = read_image(image_path) >>> clip_processor = CLIPProcessor.from_pretrained('openai/clip-vit-base-patch32') ftfy or spacy is not installed using BERT BasicTokenizer instead of ftfy. >>> clip_outputs = clip_processor(images=img) >>> clip_outputs['pixel_values'][0] = clip_outputs['pixel_values'][0].transpose(1,2,0) # Need to transpose images as model expected channel last images. >>> tokenizer = BertTokenizerFast.from_pretrained('bert-base-multilingual-uncased') >>> model = FlaxCLIPVisionBertForSequenceClassification.from_pretrained('flax-community/clip-vision-bert-vqa-ft-6k') >>> text = "Are there teddy bears in the image?" >>> tokens = tokenizer([text], return_tensors="np") >>> pixel_values = np.concatenate([clip_outputs['pixel_values']]) >>> outputs = model(pixel_values=pixel_values, **tokens) >>> preds = outputs.logits[0] >>> sorted_indices = np.argsort(preds)[::-1] # Get reverse sorted scores >>> top_5_indices = sorted_indices[:5] >>> top_5_tokens = list(map(model.config.id2label.get,top_5_indices)) >>> top_5_scores = preds[top_5_indices] >>> print(dict(zip(top_5_tokens, top_5_scores))) {'yes': 15.809224, 'no': 7.8785815, '<unk>': 4.622649, 'very': 4.511462, 'neither': 3.600822} ``` ## Training data 🏋🏻‍♂️ The CLIP-Vision-BERT model was fine-tuned on the translated version of the VQAv2 dataset in four languages using Marian: English, French, German and Spanish. Hence, the dataset is four times the original English questions. 
The dataset questions and image URLs/paths can be downloaded from [flax-community/multilingual-vqa](https://huggingface.co/datasets/flax-community/multilingual-vqa). ## Data Cleaning 🧹 Though the original dataset contains 443,757 train and 214,354 validation image-question pairs, we only use the `multiple_choice_answer` field. Answers that are not present in the 3129 classes are mapped to the `<unk>` label. **Splits** We use the original train-val splits from the VQAv2 dataset. After translation, we get 1,775,028 train image-text pairs, and 857,416 validation image-text pairs. ## Training procedure 👨🏻‍💻 ### Preprocessing The texts are lowercased and tokenized using WordPiece and a shared vocabulary size of approximately 110,000. The beginning of a new document is marked with `[CLS]` and the end of one by `[SEP]`. ### Fine-tuning The checkpoint of the model was trained on a Google Cloud Engine TPUv3-8 machine (with 335 GB of RAM, 1000 GB of hard drive, 96 CPU cores) using **8 v3 TPU cores** for 6k steps with a per device batch size of 128 and a max sequence length of 128. The optimizer used is AdamW with a learning rate of 5e-5, learning rate warmup for 1600 steps, and linear decay of the learning rate after. We tracked experiments using TensorBoard. Here is a link to the main dashboard: [CLIP Vision BERT VQAv2 Fine-tuning Dashboard](https://huggingface.co/flax-community/multilingual-vqa-pt-60k-ft/tensorboard) #### **Fine-tuning Results 📊** The model at this checkpoint reached an **eval accuracy of 0.49** on our multilingual VQAv2 dataset. ## Team Members - Gunjan Chhablani [@gchhablani](https://hf.co/gchhablani) - Bhavitvya Malik [@bhavitvyamalik](https://hf.co/bhavitvyamalik) ## Acknowledgements We thank [Nilakshan Kunananthaseelan](https://huggingface.co/knilakshan20) for helping us whenever he could get a chance. We also thank [Abheesht Sharma](https://huggingface.co/abheesht) for helping in the discussions in the initial phases. [Luke Melas](https://github.com/lukemelas) helped us get the CC-12M data on our TPU-VMs and we are very grateful to him. This project would not have been possible without the help of [Patrick](https://huggingface.co/patrickvonplaten) and [Suraj](https://huggingface.co/valhalla), who met with us frequently, helped review our approach, and guided us throughout the project. Huge thanks to Huggingface 🤗 & the Google JAX/Flax team for such a wonderful community week, for answering our queries on the Slack channel, and for providing us with the TPU-VMs. <img src=https://pbs.twimg.com/media/E443fPjX0AY1BsR.jpg:large>
null
Non_BioNLP
# CLIP-Vision-BERT Multilingual VQA Model Fine-tuned CLIP-Vision-BERT on translated [VQAv2](https://visualqa.org/challenge.html) image-text pairs using sequence classification objective. We translate the dataset to three other languages other than English: French, German, and Spanish using the [MarianMT Models](https://huggingface.co/transformers/model_doc/marian.html). This model is based on the VisualBERT which was introduced in [this paper](https://arxiv.org/abs/1908.03557) and first released in [this repository](https://github.com/uclanlp/visualbert). The output is 3129 class logits, the same classes as used by VisualBERT authors. The initial weights are loaded from the Conceptual-12M 60k [checkpoints](https://huggingface.co/flax-community/clip-vision-bert-cc12m-60k). We trained the CLIP-Vision-BERT VQA model during community week hosted by Huggingface 🤗 using JAX/Flax. ## Model description CLIP-Vision-BERT is a modified BERT model which takes in visual embeddings from the CLIP-Vision transformer and concatenates them with BERT textual embeddings before passing them to the self-attention layers of BERT. This is done for deep cross-modal interaction between the two modes. ## Intended uses & limitations❗️ This model is fine-tuned on a multi-translated version of the visual question answering task - [VQA v2](https://visualqa.org/challenge.html). Since VQAv2 is a dataset scraped from the internet, it will involve some biases which will also affect all fine-tuned versions of this model. ### How to use❓ You can use this model directly on visual question answering. You will need to clone the model from [here](https://github.com/gchhablani/multilingual-vqa). An example of usage is shown below: ```python >>> from torchvision.io import read_image >>> import numpy as np >>> import os >>> from transformers import CLIPProcessor, BertTokenizerFast >>> from model.flax_clip_vision_bert.modeling_clip_vision_bert import FlaxCLIPVisionBertForSequenceClassification >>> image_path = os.path.join('images/val2014', os.listdir('images/val2014')[0]) >>> img = read_image(image_path) >>> clip_processor = CLIPProcessor.from_pretrained('openai/clip-vit-base-patch32') ftfy or spacy is not installed using BERT BasicTokenizer instead of ftfy. >>> clip_outputs = clip_processor(images=img) >>> clip_outputs['pixel_values'][0] = clip_outputs['pixel_values'][0].transpose(1,2,0) # Need to transpose images as model expected channel last images. >>> tokenizer = BertTokenizerFast.from_pretrained('bert-base-multilingual-uncased') >>> model = FlaxCLIPVisionBertForSequenceClassification.from_pretrained('flax-community/clip-vision-bert-vqa-ft-6k') >>> text = "Are there teddy bears in the image?" >>> tokens = tokenizer([text], return_tensors="np") >>> pixel_values = np.concatenate([clip_outputs['pixel_values']]) >>> outputs = model(pixel_values=pixel_values, **tokens) >>> preds = outputs.logits[0] >>> sorted_indices = np.argsort(preds)[::-1] # Get reverse sorted scores >>> top_5_indices = sorted_indices[:5] >>> top_5_tokens = list(map(model.config.id2label.get,top_5_indices)) >>> top_5_scores = preds[top_5_indices] >>> print(dict(zip(top_5_tokens, top_5_scores))) {'yes': 15.809224, 'no': 7.8785815, '<unk>': 4.622649, 'very': 4.511462, 'neither': 3.600822} ``` ## Training data 🏋🏻‍♂️ The CLIP-Vision-BERT model was fine-tuned on the translated version of the VQAv2 dataset in four languages using Marian: English, French, German and Spanish. Hence, the dataset is four times the original English questions. 
The dataset questions and image URLs/paths can be downloaded from [flax-community/multilingual-vqa](https://huggingface.co/datasets/flax-community/multilingual-vqa). ## Data Cleaning 🧹 Though the original dataset contains 443,757 train and 214,354 validation image-question pairs, we only use the `multiple_choice_answer` field. Answers that are not present in the 3129 classes are mapped to the `<unk>` label. **Splits** We use the original train-val splits from the VQAv2 dataset. After translation, we get 1,775,028 train image-text pairs, and 857,416 validation image-text pairs. ## Training procedure 👨🏻‍💻 ### Preprocessing The texts are lowercased and tokenized using WordPiece and a shared vocabulary size of approximately 110,000. The beginning of a new document is marked with `[CLS]` and the end of one by `[SEP]`. ### Fine-tuning The checkpoint of the model was trained on a Google Cloud Engine TPUv3-8 machine (with 335 GB of RAM, 1000 GB of hard drive, 96 CPU cores) using **8 v3 TPU cores** for 6k steps with a per device batch size of 128 and a max sequence length of 128. The optimizer used is AdamW with a learning rate of 5e-5, learning rate warmup for 1600 steps, and linear decay of the learning rate after. We tracked experiments using TensorBoard. Here is a link to the main dashboard: [CLIP Vision BERT VQAv2 Fine-tuning Dashboard](https://huggingface.co/flax-community/multilingual-vqa-pt-60k-ft/tensorboard) #### **Fine-tuning Results 📊** The model at this checkpoint reached an **eval accuracy of 0.49** on our multilingual VQAv2 dataset. ## Team Members - Gunjan Chhablani [@gchhablani](https://hf.co/gchhablani) - Bhavitvya Malik [@bhavitvyamalik](https://hf.co/bhavitvyamalik) ## Acknowledgements We thank [Nilakshan Kunananthaseelan](https://huggingface.co/knilakshan20) for helping us whenever he could get a chance. We also thank [Abheesht Sharma](https://huggingface.co/abheesht) for helping in the discussions in the initial phases. [Luke Melas](https://github.com/lukemelas) helped us get the CC-12M data on our TPU-VMs and we are very grateful to him. This project would not have been possible without the help of [Patrick](https://huggingface.co/patrickvonplaten) and [Suraj](https://huggingface.co/valhalla), who met with us frequently, helped review our approach, and guided us throughout the project. Huge thanks to Huggingface 🤗 & the Google JAX/Flax team for such a wonderful community week, for answering our queries on the Slack channel, and for providing us with the TPU-VMs. <img src=https://pbs.twimg.com/media/E443fPjX0AY1BsR.jpg:large>
{}
task
[ "QUESTION_ANSWERING", "TRANSLATION" ]
46,069
rambodazimi/bert-base-uncased-finetuned-LoRA-QNLI
rambodazimi
null
[ "safetensors", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "region:us" ]
2024-08-23T03:01:20Z
2024-08-28T14:10:01+00:00
0
0
--- datasets: - glue license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-LoRA-QNLI results: - task: type: text-classification name: Text Classification dataset: name: glue type: glue args: qnli metrics: - type: accuracy value: 0.9114039904814205 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-lora-qnli This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Accuracy: 0.9114 - trainable model parameters: 1181186 - all model parameters: 110664964 - percentage of trainable model parameters: 1.07% ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-04 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - weight_decay: 0.01 - rank: 32 - lora_alpha: 32 - lora_dropout: 0.05 - num_epochs: 4
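For readers who want to reproduce the adapter setup, the hyperparameters above map onto the 🤗 PEFT library roughly as follows. This is a sketch under the assumption that PEFT's LoRA implementation was used with its default target modules; it is not the exact training script behind this checkpoint.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

# Base checkpoint with a 2-class head for QNLI (entailment / not entailment).
base_model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,  # sequence-pair classification
    r=32,                        # rank
    lora_alpha=32,
    lora_dropout=0.05,
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # Should report roughly the ~1% trainable share noted above.
```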
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-lora-qnli This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Accuracy: 0.9114 - trainable model parameters: 1181186 - all model parameters: 110664964 - percentage of trainable model parameters: 1.07% ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-04 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - weight_decay: 0.01 - rank: 32 - lora_alpha: 32 - lora_dropout: 0.05 - num_epochs: 4
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-uncased-finetuned-LoRA-QNLI", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "qnli"}, "metrics": [{"type": "accuracy", "value": 0.9114039904814205, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,070
PeterBanning71/t5-small-finetuned-tfg
PeterBanning71
summarization
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "summarization", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-03-25T12:30:46Z
2023-03-25T13:05:06+00:00
17
0
--- license: apache-2.0 tags: - summarization - generated_from_trainer model-index: - name: t5-small-finetuned-tfg results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-tfg This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 5 | 8.2836 | 4.1934 | 0.0 | 3.7869 | 4.1934 | 19.0 | ### Framework versions - Transformers 4.27.3 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-tfg This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 5 | 8.2836 | 4.1934 | 0.0 | 3.7869 | 4.1934 | 19.0 | ### Framework versions - Transformers 4.27.3 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
{"license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "t5-small-finetuned-tfg", "results": []}]}
task
[ "SUMMARIZATION" ]
46,071
FuseAI/FuseChat-Llama-3.2-1B-Instruct
FuseAI
null
[ "safetensors", "llama", "dataset:FuseAI/FuseChat-3.0-DPO-Data", "arxiv:2412.03187", "arxiv:2408.07990", "region:us" ]
2024-11-26T08:50:59Z
2025-02-07T09:50:23+00:00
43
4
--- datasets: - FuseAI/FuseChat-3.0-DPO-Data --- <p align="center" width="100%"> </p> <div id="top" align="center"> FuseChat-3.0: Preference Optimization for Implicit Model Fusion ----------------------------- <h4> |<a href="https://arxiv.org/abs/2412.03187"> 📑 Paper </a> | <a href="https://github.com/SLIT-AI/FuseChat-3.0"> 🐱 GitHub Repo </a> | <a href="https://huggingface.co/FuseAI"> 🤗 Hugging Face </a> | <a href="https://slit-ai.github.io/FuseChat-3.0/"> 🌐 Website </a> | </h4> </div> <div align="center"> <img src="FuseChat-3.0.png" width=70%/> </div> We present FuseChat-3.0, a series of models crafted to enhance performance by integrating the strengths of multiple source LLMs into more compact target LLMs. To achieve this fusion, we utilized four powerful source LLMs: Gemma-2-27B-It, Mistral-Large-Instruct-2407, Qwen-2.5-72B-Instruct, and Llama-3.1-70B-Instruct. For the target LLMs, we employed three widely-used smaller models—Llama-3.1-8B-Instruct, Gemma-2-9B-It, and Qwen-2.5-7B-Instruct—along with two even more compact models—Llama-3.2-3B-Instruct and Llama-3.2-1B-Instruct. The implicit model fusion process involves a two-stage training pipeline comprising Supervised Fine-Tuning (SFT) to mitigate distribution discrepancies between target and source LLMs, and Direct Preference Optimization (DPO) for learning preferences from multiple source LLMs. The resulting FuseChat-3.0 models demonstrated substantial improvements in tasks related to general conversation, instruction following, mathematics, and coding. Notably, when Llama-3.1-8B-Instruct served as the target LLM, our fusion approach achieved an average improvement of 6.8 points across 14 benchmarks. Moreover, it showed significant improvements of 37.1 and 30.1 points on instruction-following test sets AlpacaEval-2 and Arena-Hard respectively. We have released the [FuseChat-3.0](https://huggingface.co/FuseAI) models and datasets on Huggingface. ## Overview Combining the strengths of multiple large language models (LLMs) represents a promising approach to enhance individual model capabilities. Model fusion is a technique that integrates the strengths of robust source LLMs into a target LLM. Previous iterations of the [FuseChat](https://arxiv.org/abs/2408.07990) series employed probabilistic distribution matrices generated by source models to transfer knowledge to target models. We refer to this method as **explicit model fusion (EMF)** because it involves a well-defined knowledge transfer process. While applicable to models with varying architectures and sizes, and without increasing memory overhead during inference, this approach presents notable challenges such as vocabulary alignment and the merging of distribution matrices from different LLMs. These issues complicate model fusion, reduce its efficiency, and may introduce noise and errors and affect the fusion results. FuseChat-3.0, however, takes a different approach by enhancing a single LLM through implicit learning from robust open-source LLMs, a process we term **implicit model fusion (IMF)**. The concept of IMF has been widely utilized to improve the performance of weaker models. For instance, a weak model can be boosted through fine-tuning with outputs from stronger LLMs. Moreover, a reward model can be trained using outputs from various LLMs, enabling it to learn and capture the differences in capabilities between the LLMs. Zephyr further collects responses from multiple LLMs and ranks them with GPT-4 to obtain preference data for training the policy. 
Inspired by recent alignment techniques, we propose an IMF method to transfer the capabilities of source LLMs to a target LLM through preference optimization. Our IMF method follows a three-stage process aimed at effectively transferring capabilities from source LLMs to a target LLM. First, during **dataset construction**, we sample N responses from each of the source LLMs and annotate these responses using an external reward model. Second, in the **supervised fine-tuning (SFT)** stage, we fine-tune the target model using the best responses, which not only enhances the target model's capabilities but also helps mitigate the distributional gap between the source and target models. Finally, in the **direct preference optimization (DPO)** stage, we optimize the target model by using the best and worst responses from the source models as preference pairs, further enhancing the target model's performance. The complete pipeline will be detailed in the following paragraphs. ## Dataset ### Prompt Selection Our datasets were designed to enhance the model's instruction following, general conversation, mathematics, coding, and Chinese-language capabilities. We selected data from open-source community datasets, applying targeted filtering and preprocessing. Key datasets and filtering criteria included: - **Instruction Following & General Conversation**: Sourced from [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback), [Magpie-Pro-DPO-100K-v0.1](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-DPO-100K-v0.1), and [HelpSteer2](https://huggingface.co/datasets/nvidia/HelpSteer2), excluding code and math data. - **Mathematics**: Selected from [OpenMathInstruct-2](https://huggingface.co/datasets/nvidia/OpenMathInstruct-2), with nearly 52,000 unique samples. - **Coding**: Curated from [leetcode](https://huggingface.co/datasets/greengerong/leetcode) and [self-oss-instruct-sc2-exec-filter-50k](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k), retaining prompts with test cases. - **Chinese Language**: Integrated [alpaca_gpt4_zh](https://huggingface.co/datasets/llamafactory/alpaca_gpt4_zh) and [Magpie-Qwen2-Pro-200K-Chinese](https://huggingface.co/datasets/Magpie-Align/Magpie-Qwen2-Pro-200K-Chinese), filtering out code and math prompts to retain approximately 10,000 high-quality samples. ### Response Sampling For each dataset's prompts, we synthesized responses mainly from four different series of source models, specifically [Gemma-2-27b-It](https://huggingface.co/google/gemma-2-27b-it), [Mistral-Large-Instruct-2407](https://huggingface.co/mistralai/Mistral-Large-Instruct-2407), [Qwen-2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct), and [Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct). - **Instruction Following & General Conversation**: We sampled each prompt five times from all the source models. - **Mathematics**: We retained the responses generated by Llama-3.1-405B-Instruct from the original dataset (OpenMathInstruct-2) and additionally sampled responses using [Qwen-2.5-Math-72B-Instruct](https://huggingface.co/Qwen/Qwen-2.5-Math-72B-Instruct). - **Coding**: We sampled each prompt eight times for all source models. - **Chinese Language**: We included a single response sampled exclusively from Qwen-2.5-72B-Instruct. The sampling parameters for different models are detailed in the table below. 
<table class="js-sort-table table hidden"> <tr> <td class="js-sort-string"><strong>Source LLMs</strong></td> <td class="js-sort-string"><strong>Sampling Params</strong></td> </tr> <tr> <td>Gemma-2-27b-It</td> <td>Temp 0.8 Top-p 0.95</td> </tr> <tr> <td>Mistral-Large-Instruct-2407</td> <td>Temp 0.8 Top-p 0.95</td> </tr> <tr> <td>Qwen-2.5-(Math)-72B-Instruct</td> <td>Temp 0.7 Top-p 0.8 Repetition penalty 1.05</td> </tr> <tr> <td>Llama-3.1-70B-Instruct</td> <td>Temp 0.8 Top-p 0.95</td> </tr> </table> ### Data Construction Unlike the original approach in [WRPO](https://arxiv.org/abs/2412.03187), which constructs preference pairs from target model responses and treats source model responses as additional positive samples, our research in mathematics and coding domains revealed that sampling from multiple source models yields more and higher-quality preference pair data. Based on this insight, FuseChat-3.0 leverages the best and worst response pairs generated by source models as preference pairs to optimize the target model. This refined approach not only preserves the core advantages of implicit model fusion but also results in a more streamlined and practical implementation, making it particularly well-suited for real-world applications within the open-source community. - **Instruction Following**: To assign RM scores to the five responses generated by each source model, we employed [ArmoRM](https://huggingface.co/RLHFlow/ArmoRM-Llama3-8B-v0.1) for annotation. We then divided the annotated data into SFT and DPO datasets using a 4:6 ratio. For the SFT phase, we selected the responses with the highest RM scores. During the DPO phase, we paired responses from the same source model, designating those with the highest RM scores as positive samples and those with the lowest RM scores as negative samples. We ensured that the RM score difference between the positive and negative samples in each pair ranged from 0.01 to 0.1. - **Mathematics**: We initially annotated the responses from all source models for correctness by comparing them with the gold labels and evaluating them using the RM scores provided by ArmoRM. We then strategically divided the dataset into SFT phase and DPO phase. In the SFT phase, we incorporated responses that were correct and had the highest RM scores. This selection ensured that the fine-tuning process was based on high-quality responses that aligned closely with the desired outcomes. For the DPO phase, we constructed paired samples from the same source model. The positive samples consisted of correct answers with the highest RM scores, while the negative samples were incorrect answers with the lowest RM scores. To ensure meaningful comparisons during optimization, we maintained an RM score differential between positive and negative pairs within the range of 0.01 to 0.1. - **Coding**: We employed a dual-scoring system comprising correctness scores and RM scores for coding evaluation. The correctness scores assessed whether the code passed both static analysis and test cases, ensuring functional accuracy. The RM scores were used for preference evaluation, gauging the quality of responses based on predefined criteria. During the SFT phase, we included responses that not only passed all test cases but also achieved the highest RM scores. This selection ensured that the model was fine-tuned on exemplary code that met both correctness and preference standards. 
In the DPO phase, we contrasted positive samples—high-scoring responses that passed the tests—with negative samples—low-scoring responses that failed the tests. This comparison aimed to optimize the model's ability to prefer higher-quality code during training. We excluded any instances where all model responses failed to meet the testing criteria. This exclusion was necessary to maintain the integrity of the evaluation process, as such cases did not provide meaningful data for assessing and improving the model's performance. - **Chinese**: We exclusively utilized responses sampled from Qwen-2.5-72B-Instruct during the SFT phase, due to its strong performance in the Chinese language. Our final dataset comprised 158,667 total entries, with 94,539 entries for the SFT phase and 64,128 preference pairs for the DPO phase. The overall composition of the datasets is shown below. <table class="js-sort-table table hidden"> <tr> <td class="js-sort-string"><strong>Dataset</strong></td> <td class="js-sort-number"><strong>Total Count</strong></td> <td class="js-sort-number"><strong>SFT Count</strong></td> <td class="js-sort-number"><strong>DPO Count</strong></td> <td class="js-sort-string"><strong>Category</strong></td> </tr> <tr> <td><a href="https://huggingface.co/datasets/openbmb/UltraFeedback" target="_blank">UltraFeedback</a></td> <td>51098</td> <td>20439</td> <td>30659</td> <td>Instruction following</td> </tr> <tr> <td><a href="https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-DPO-100K-v0.1" target="_blank">Magpie-Pro-DPO</a></td> <td>20374</td> <td>8149</td> <td>12225</td> <td>Instruction following</td> </tr> <tr> <td><a href="https://huggingface.co/datasets/nvidia/HelpSteer2" target="_blank">HelpSteer2</a></td> <td>9435</td> <td>3774</td> <td>5661</td> <td>Instruction following</td> </tr> <tr> <td><a href="https://huggingface.co/datasets/nvidia/OpenMathInstruct-2" target="_blank">OpenMathInstruct-2</a></td> <td>51803</td> <td>40188</td> <td>11615</td> <td>Mathematics</td> </tr> <tr> <td><a href="https://huggingface.co/datasets/greengerong/leetcode" target="_blank">leetcode</a></td> <td>3113</td> <td>1877</td> <td>1236</td> <td>Coding</td> </tr> <tr> <td><a href="https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k" target="_blank">self-oss-instruct-sc2</a></td> <td>12892</td> <td>10160</td> <td>2732</td> <td>Coding</td> </tr> <tr> <td><a href="https://huggingface.co/datasets/llamafactory/alpaca_gpt4_zh" target="_blank">alpaca_gpt4_zh</a></td> <td>2471</td> <td>2471</td> <td>0</td> <td>Chinese Language</td> </tr> <tr> <td><a href="https://huggingface.co/datasets/Magpie-Align/Magpie-Qwen2-Pro-200K-Chinese" target="_blank">Magpie-Qwen2-Pro</a></td> <td>7481</td> <td>7481</td> <td>0</td> <td>Chinese Language</td> </tr> <tr> <td><strong>Total</strong></td> <td>158667</td> <td>94539</td> <td>64128</td> <td>All</td> </tr> </table> ## Training The implicit model fusion process involves a two-stage training pipeline comprising Supervised Fine-Tuning (SFT) to mitigate distribution discrepancies between target and source LLMs, and Direct Preference Optimization (DPO) for learning preferences from multiple source LLMs. ### SFT We used [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory) as our fine-tuning library. For all target models, we fine-tuned for 3 epochs, with a batch size of 128 and a maximum sequence length of 2048 tokens. A cosine learning rate schedule with a warmup ratio of 0.1 is employed. Different models' learning rates are shown in the table below. 
<table class="js-sort-table table hidden"> <tr> <td class="js-sort-string"><strong>Target Models</strong></td> <td class="js-sort-string"><strong>Learning rate</strong></td> </tr> <tr> <td>Llama-3.1-8B-Instruct</td> <td>5e-6</td> </tr> <tr> <td>Qwen-2.5-7B-Instruct</td> <td>2e-6</td> </tr> <tr> <td>Gemma-2-9B-It</td> <td>2e-6</td> </tr> <tr> <td>Llama-3.2-(1/3)B-Instruct</td> <td>5e-6</td> </tr> </table> ### DPO We used [alignment-handbook](https://github.com/huggingface/alignment-handbook) as our DPO training library. For all Target SFT models, we trained for 1 epoch, set maximum sequence length to 2048, used cosine learning rate with a warmup ratio of 0.1. We saved checkpoints every 100 steps and selected the best from the last two checkpoints. For Llama-3.1 and Llama-3.2 series models, we introduced length normalization in DPO training, as shown in the formula below. ![Length Normalized DPO Formula](https://latex.codecogs.com/svg.image?\mathcal{L}_{\text{LN-DPO}}=-\log\sigma\left(\frac{\beta}{|y_w|}\log\frac{\pi_\theta(y_w|x)}{\pi_{\text{ref}}(y_w|x)}-\frac{\beta}{|y_l|}\log\frac{\pi_\theta(y_l|x)}{\pi_{\text{ref}}(y_l|x)}\right)) Different models' hyperparameters are shown in the table below. <table class="js-sort-table table hidden"> <tr> <td class="js-sort-string"><strong>Target SFT Models</strong></td> <td class="js-sort-string"><strong>Learning rate</strong></td> <td class="js-sort-string"><strong>β</strong></td> <td class="js-sort-string"><strong>Length normalize</strong></td> </tr> <tr> <td>FuseChat-Llama-3.1-8B-SFT</td> <td>8e-7</td> <td>10</td> <td>Yes</td> </tr> <tr> <td>FuseChat-Qwen-2.5-7B-SFT</td> <td>3e-7</td> <td>0.01</td> <td>No</td> </tr> <tr> <td>FuseChat-Gemma-2-9B-SFT</td> <td>5e-7</td> <td>0.01</td> <td>No</td> </tr> <tr> <td>FuseChat-Llama-3.2-(1/3)B-SFT</td> <td>1e-6</td> <td>10</td> <td>Yes</td> </tr> </table> ## Evaluation The evaluation of instruction-tuned models mainly focuses on the model performance of instruction following, natural language understanding, general question answering, reasoning, mathematics, coding, etc. For the evaluation of FuseChat-3.0, we include 14 benchmarks and organize them into four categories: - **Instruction Following** Tasks: AlpacaEval-2, Arena-Hard, MTbench, AlignBench v1.1 (Chinese). - **General** Tasks: LiveBench-0831, MMLU-Pro, MMLU-redux, GPQA-Diamond. - **Mathematics** Tasks: GSM8K, MATH, AMC 23. - **Coding** Tasks: HumanEval, MBPP, LiveCodeBench 2408-2411. We include more details and release our evaluation code at [FuseEval](https://github.com/SLIT-AI/FuseChat-3.0/FuseEval). The evaluation results of five series fused models are as follows, showing that our FuseChat-3.0 models achieved varying degrees of improvement across different target models. When selecting Llama-3.1-8B-Instruct as the target model, our fusion model **FuseChat-Llama-3.1-8B-Instruct achieved an average performance improvement of 6.8 points across 14 benchmarks. Notably, it showed significant improvements of 37.1 and 30.1 points on instruction-following test sets AlpacaEval-2 and Arena-Hard respectively**. Additionally, FuseChat-Llama-3.1-8B-Instruct outperformed AllenAI's recently released Llama-3.1-Tulu-3-8B model on all benchmarks except GSM8K and GPQA-Diamond. All these results demonstrate the effectiveness and success of FuseChat-3.0. 
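As a companion to the length-normalization formula shown in the DPO subsection above, the objective can be written as a short PyTorch function. This is a minimal sketch that assumes per-sequence summed log-probabilities and token lengths have already been computed; it is an illustration, not the alignment-handbook implementation used for training.

```python
import torch
import torch.nn.functional as F

def length_normalized_dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log pi_theta(y_w | x), summed over tokens
    policy_rejected_logps: torch.Tensor,  # log pi_theta(y_l | x), summed over tokens
    ref_chosen_logps: torch.Tensor,       # log pi_ref(y_w | x), summed over tokens
    ref_rejected_logps: torch.Tensor,     # log pi_ref(y_l | x), summed over tokens
    chosen_lengths: torch.Tensor,         # |y_w| in tokens
    rejected_lengths: torch.Tensor,       # |y_l| in tokens
    beta: float = 10.0,                   # beta value used for the Llama targets in the table above
) -> torch.Tensor:
    # Length-normalized log-ratio terms for the chosen and rejected responses.
    chosen_term = (beta / chosen_lengths) * (policy_chosen_logps - ref_chosen_logps)
    rejected_term = (beta / rejected_lengths) * (policy_rejected_logps - ref_rejected_logps)
    # Negative log-sigmoid of the margin, averaged over the batch.
    return -F.logsigmoid(chosen_term - rejected_term).mean()
```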
### FuseChat-Llama-3.2-1B-Instruct Performance <table class="js-sort-table table hidden"> <tr> <td class="js-sort-string"><strong>Benchmarks</strong></td> <td class="js-sort-string"><strong>Llama-3.2-1B-Instruct</strong></td> <td class="js-sort-string"><strong>FuseChat-Llama-3.2-1B-SFT</strong></td> <td class="js-sort-string"><strong>FuseChat-Llama-3.2-1B-Instruct</strong></td> </tr> <tr> <td style="white-space: nowrap;">AlpacaEval-2 (LC %)</td> <td>9.7</td> <td>14.0</td> <td><strong>25.3</strong></td> </tr> <tr> <td>Arena-Hard (WR %)</td> <td>5.1</td> <td>6.0</td> <td><strong>8.6</strong></td> </tr> <tr> <td>MT-Bench</td> <td>4.7</td> <td>5.2</td> <td><strong>5.7</strong></td> </tr> <tr> <td>AlignBench v1.1</td> <td>2.9</td> <td>3.9</td> <td><strong>4.3</strong></td> </tr> <tr> <td>GSM8K</td> <td>46.3</td> <td><strong>55.6</strong></td> <td>54.5</td> </tr> <tr> <td>MATH</td> <td>32.7</td> <td><strong>34.7</strong></td> <td>33.6</td> </tr> <tr> <td>AMC23</td> <td>17.5</td> <td>15.0</td> <td><strong>20.0</strong></td> </tr> <tr> <td>LiveBench 0831</td> <td>14.0</td> <td>13.9</td> <td><strong>15.8</strong></td> </tr> <tr> <td>MMLU-Pro</td> <td><strong>22.3</strong></td> <td>21.5</td> <td>21.3</td> </tr> <tr> <td>MMLU</td> <td><strong>45.8</strong></td> <td>45.0</td> <td>44.8</td> </tr> <tr> <td>GPQA-Diamond</td> <td>21.2</td> <td><strong>25.3</strong></td> <td>24.2</td> </tr> <tr> <td>HumanEval</td> <td>39.6</td> <td>36.6</td> <td><strong>40.2</strong></td> </tr> <tr> <td>MBPP</td> <td><strong>49.5</strong></td> <td>42.1</td> <td>46.6</td> </tr> <tr> <td>Average</td> <td>24.0</td> <td>24.5</td> <td><strong>26.5</strong></td> </tr> </table> ## Citation ``` @inproceedings{yang2025weightedreward, title={Weighted-Reward Preference Optimization for Implicit Model Fusion}, author={Ziyi Yang and Fanqi Wan and Longguang Zhong and Tianyuan Shi and Xiaojun Quan}, booktitle={The Thirteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=fq24pEb8SL} } ```
null
Non_BioNLP
<p align="center" width="100%"> </p> <div id="top" align="center"> FuseChat-3.0: Preference Optimization for Implicit Model Fusion ----------------------------- <h4> |<a href="https://arxiv.org/abs/2412.03187"> 📑 Paper </a> | <a href="https://github.com/SLIT-AI/FuseChat-3.0"> 🐱 GitHub Repo </a> | <a href="https://huggingface.co/FuseAI"> 🤗 Hugging Face </a> | <a href="https://slit-ai.github.io/FuseChat-3.0/"> 🌐 Website </a> | </h4> </div> <div align="center"> <img src="FuseChat-3.0.png" width=70%/> </div> We present FuseChat-3.0, a series of models crafted to enhance performance by integrating the strengths of multiple source LLMs into more compact target LLMs. To achieve this fusion, we utilized four powerful source LLMs: Gemma-2-27B-It, Mistral-Large-Instruct-2407, Qwen-2.5-72B-Instruct, and Llama-3.1-70B-Instruct. For the target LLMs, we employed three widely-used smaller models—Llama-3.1-8B-Instruct, Gemma-2-9B-It, and Qwen-2.5-7B-Instruct—along with two even more compact models—Llama-3.2-3B-Instruct and Llama-3.2-1B-Instruct. The implicit model fusion process involves a two-stage training pipeline comprising Supervised Fine-Tuning (SFT) to mitigate distribution discrepancies between target and source LLMs, and Direct Preference Optimization (DPO) for learning preferences from multiple source LLMs. The resulting FuseChat-3.0 models demonstrated substantial improvements in tasks related to general conversation, instruction following, mathematics, and coding. Notably, when Llama-3.1-8B-Instruct served as the target LLM, our fusion approach achieved an average improvement of 6.8 points across 14 benchmarks. Moreover, it showed significant improvements of 37.1 and 30.1 points on instruction-following test sets AlpacaEval-2 and Arena-Hard respectively. We have released the [FuseChat-3.0](https://huggingface.co/FuseAI) models and datasets on Huggingface. ## Overview Combining the strengths of multiple large language models (LLMs) represents a promising approach to enhance individual model capabilities. Model fusion is a technique that integrates the strengths of robust source LLMs into a target LLM. Previous iterations of the [FuseChat](https://arxiv.org/abs/2408.07990) series employed probabilistic distribution matrices generated by source models to transfer knowledge to target models. We refer to this method as **explicit model fusion (EMF)** because it involves a well-defined knowledge transfer process. While applicable to models with varying architectures and sizes, and without increasing memory overhead during inference, this approach presents notable challenges such as vocabulary alignment and the merging of distribution matrices from different LLMs. These issues complicate model fusion, reduce its efficiency, and may introduce noise and errors and affect the fusion results. FuseChat-3.0, however, takes a different approach by enhancing a single LLM through implicit learning from robust open-source LLMs, a process we term **implicit model fusion (IMF)**. The concept of IMF has been widely utilized to improve the performance of weaker models. For instance, a weak model can be boosted through fine-tuning with outputs from stronger LLMs. Moreover, a reward model can be trained using outputs from various LLMs, enabling it to learn and capture the differences in capabilities between the LLMs. Zephyr further collects responses from multiple LLMs and ranks them with GPT-4 to obtain preference data for training the policy. 
Inspired by recent alignment techniques, we propose an IMF method to transfer the capabilities of source LLMs to a target LLM through preference optimization. Our IMF method follows a three-stage process aimed at effectively transferring capabilities from source LLMs to a target LLM. First, during **dataset construction**, we sample N responses from each of the source LLMs and annotate these responses using an external reward model. Second, in the **supervised fine-tuning (SFT)** stage, we fine-tune the target model using the best responses, which not only enhances the target model's capabilities but also helps mitigate the distributional gap between the source and target models. Finally, in the **direct preference optimization (DPO)** stage, we optimize the target model by using the best and worst responses from the source models as preference pairs, further enhancing the target model's performance. The complete pipeline will be detailed in the following paragraphs. ## Dataset ### Prompt Selection Our datasets were designed to enhance the model's instruction following, general conversation, mathematics, coding, and Chinese-language capabilities. We selected data from open-source community datasets, applying targeted filtering and preprocessing. Key datasets and filtering criteria included: - **Instruction Following & General Conversation**: Sourced from [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback), [Magpie-Pro-DPO-100K-v0.1](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-DPO-100K-v0.1), and [HelpSteer2](https://huggingface.co/datasets/nvidia/HelpSteer2), excluding code and math data. - **Mathematics**: Selected from [OpenMathInstruct-2](https://huggingface.co/datasets/nvidia/OpenMathInstruct-2), with nearly 52,000 unique samples. - **Coding**: Curated from [leetcode](https://huggingface.co/datasets/greengerong/leetcode) and [self-oss-instruct-sc2-exec-filter-50k](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k), retaining prompts with test cases. - **Chinese Language**: Integrated [alpaca_gpt4_zh](https://huggingface.co/datasets/llamafactory/alpaca_gpt4_zh) and [Magpie-Qwen2-Pro-200K-Chinese](https://huggingface.co/datasets/Magpie-Align/Magpie-Qwen2-Pro-200K-Chinese), filtering out code and math prompts to retain approximately 10,000 high-quality samples. ### Response Sampling For each dataset's prompts, we synthesized responses mainly from four different series of source models, specifically [Gemma-2-27b-It](https://huggingface.co/google/gemma-2-27b-it), [Mistral-Large-Instruct-2407](https://huggingface.co/mistralai/Mistral-Large-Instruct-2407), [Qwen-2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct), and [Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct). - **Instruction Following & General Conversation**: We sampled each prompt five times from all the source models. - **Mathematics**: We retained the responses generated by Llama-3.1-405B-Instruct from the original dataset (OpenMathInstruct-2) and additionally sampled responses using [Qwen-2.5-Math-72B-Instruct](https://huggingface.co/Qwen/Qwen-2.5-Math-72B-Instruct). - **Coding**: We sampled each prompt eight times for all source models. - **Chinese Language**: We included a single response sampled exclusively from Qwen-2.5-72B-Instruct. The sampling parameters for different models are detailed in the table below. 
<table class="js-sort-table table hidden"> <tr> <td class="js-sort-string"><strong>Source LLMs</strong></td> <td class="js-sort-string"><strong>Sampling Params</strong></td> </tr> <tr> <td>Gemma-2-27b-It</td> <td>Temp 0.8 Top-p 0.95</td> </tr> <tr> <td>Mistral-Large-Instruct-2407</td> <td>Temp 0.8 Top-p 0.95</td> </tr> <tr> <td>Qwen-2.5-(Math)-72B-Instruct</td> <td>Temp 0.7 Top-p 0.8 Repetition penalty 1.05</td> </tr> <tr> <td>Llama-3.1-70B-Instruct</td> <td>Temp 0.8 Top-p 0.95</td> </tr> </table> ### Data Construction Unlike the original approach in [WRPO](https://arxiv.org/abs/2412.03187), which constructs preference pairs from target model responses and treats source model responses as additional positive samples, our research in mathematics and coding domains revealed that sampling from multiple source models yields more and higher-quality preference pair data. Based on this insight, FuseChat-3.0 leverages the best and worst response pairs generated by source models as preference pairs to optimize the target model. This refined approach not only preserves the core advantages of implicit model fusion but also results in a more streamlined and practical implementation, making it particularly well-suited for real-world applications within the open-source community. - **Instruction Following**: To assign RM scores to the five responses generated by each source model, we employed [ArmoRM](https://huggingface.co/RLHFlow/ArmoRM-Llama3-8B-v0.1) for annotation. We then divided the annotated data into SFT and DPO datasets using a 4:6 ratio. For the SFT phase, we selected the responses with the highest RM scores. During the DPO phase, we paired responses from the same source model, designating those with the highest RM scores as positive samples and those with the lowest RM scores as negative samples. We ensured that the RM score difference between the positive and negative samples in each pair ranged from 0.01 to 0.1. - **Mathematics**: We initially annotated the responses from all source models for correctness by comparing them with the gold labels and evaluating them using the RM scores provided by ArmoRM. We then strategically divided the dataset into SFT phase and DPO phase. In the SFT phase, we incorporated responses that were correct and had the highest RM scores. This selection ensured that the fine-tuning process was based on high-quality responses that aligned closely with the desired outcomes. For the DPO phase, we constructed paired samples from the same source model. The positive samples consisted of correct answers with the highest RM scores, while the negative samples were incorrect answers with the lowest RM scores. To ensure meaningful comparisons during optimization, we maintained an RM score differential between positive and negative pairs within the range of 0.01 to 0.1. - **Coding**: We employed a dual-scoring system comprising correctness scores and RM scores for coding evaluation. The correctness scores assessed whether the code passed both static analysis and test cases, ensuring functional accuracy. The RM scores were used for preference evaluation, gauging the quality of responses based on predefined criteria. During the SFT phase, we included responses that not only passed all test cases but also achieved the highest RM scores. This selection ensured that the model was fine-tuned on exemplary code that met both correctness and preference standards. 
In the DPO phase, we contrasted positive samples—high-scoring responses that passed the tests—with negative samples—low-scoring responses that failed the tests. This comparison aimed to optimize the model's ability to prefer higher-quality code during training. We excluded any instances where all model responses failed to meet the testing criteria. This exclusion was necessary to maintain the integrity of the evaluation process, as such cases did not provide meaningful data for assessing and improving the model's performance. - **Chinese**: We exclusively utilized responses sampled from Qwen-2.5-72B-Instruct during the SFT phase, due to its strong performance in the Chinese language. Our final dataset comprised 158,667 total entries, with 94,539 entries for the SFT phase and 64,128 preference pairs for the DPO phase. The overall composition of the datasets is shown below. <table class="js-sort-table table hidden"> <tr> <td class="js-sort-string"><strong>Dataset</strong></td> <td class="js-sort-number"><strong>Total Count</strong></td> <td class="js-sort-number"><strong>SFT Count</strong></td> <td class="js-sort-number"><strong>DPO Count</strong></td> <td class="js-sort-string"><strong>Category</strong></td> </tr> <tr> <td><a href="https://huggingface.co/datasets/openbmb/UltraFeedback" target="_blank">UltraFeedback</a></td> <td>51098</td> <td>20439</td> <td>30659</td> <td>Instruction following</td> </tr> <tr> <td><a href="https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-DPO-100K-v0.1" target="_blank">Magpie-Pro-DPO</a></td> <td>20374</td> <td>8149</td> <td>12225</td> <td>Instruction following</td> </tr> <tr> <td><a href="https://huggingface.co/datasets/nvidia/HelpSteer2" target="_blank">HelpSteer2</a></td> <td>9435</td> <td>3774</td> <td>5661</td> <td>Instruction following</td> </tr> <tr> <td><a href="https://huggingface.co/datasets/nvidia/OpenMathInstruct-2" target="_blank">OpenMathInstruct-2</a></td> <td>51803</td> <td>40188</td> <td>11615</td> <td>Mathematics</td> </tr> <tr> <td><a href="https://huggingface.co/datasets/greengerong/leetcode" target="_blank">leetcode</a></td> <td>3113</td> <td>1877</td> <td>1236</td> <td>Coding</td> </tr> <tr> <td><a href="https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k" target="_blank">self-oss-instruct-sc2</a></td> <td>12892</td> <td>10160</td> <td>2732</td> <td>Coding</td> </tr> <tr> <td><a href="https://huggingface.co/datasets/llamafactory/alpaca_gpt4_zh" target="_blank">alpaca_gpt4_zh</a></td> <td>2471</td> <td>2471</td> <td>0</td> <td>Chinese Language</td> </tr> <tr> <td><a href="https://huggingface.co/datasets/Magpie-Align/Magpie-Qwen2-Pro-200K-Chinese" target="_blank">Magpie-Qwen2-Pro</a></td> <td>7481</td> <td>7481</td> <td>0</td> <td>Chinese Language</td> </tr> <tr> <td><strong>Total</strong></td> <td>158667</td> <td>94539</td> <td>64128</td> <td>All</td> </tr> </table> ## Training The implicit model fusion process involves a two-stage training pipeline comprising Supervised Fine-Tuning (SFT) to mitigate distribution discrepancies between target and source LLMs, and Direct Preference Optimization (DPO) for learning preferences from multiple source LLMs. ### SFT We used [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory) as our fine-tuning library. For all target models, we fine-tuned for 3 epochs, with a batch size of 128 and a maximum sequence length of 2048 tokens. A cosine learning rate schedule with a warmup ratio of 0.1 is employed. Different models' learning rates are shown in the table below. 
<table class="js-sort-table table hidden">
<tr>
    <td class="js-sort-string"><strong>Target Models</strong></td>
    <td class="js-sort-string"><strong>Learning rate</strong></td>
</tr>
<tr>
    <td>Llama-3.1-8B-Instruct</td>
    <td>5e-6</td>
</tr>
<tr>
    <td>Qwen-2.5-7B-Instruct</td>
    <td>2e-6</td>
</tr>
<tr>
    <td>Gemma-2-9B-It</td>
    <td>2e-6</td>
</tr>
<tr>
    <td>Llama-3.2-(1/3)B-Instruct</td>
    <td>5e-6</td>
</tr>
</table>

### DPO

We used [alignment-handbook](https://github.com/huggingface/alignment-handbook) as our DPO training library. For all target SFT models, we trained for 1 epoch with a maximum sequence length of 2048, using a cosine learning rate schedule with a warmup ratio of 0.1. We saved checkpoints every 100 steps and selected the best of the last two checkpoints. For Llama-3.1 and Llama-3.2 series models, we introduced length normalization in DPO training, as shown in the formula below.

![Length Normalized DPO Formula](https://latex.codecogs.com/svg.image?\mathcal{L}_{\text{LN-DPO}}=-\log\sigma\left(\frac{\beta}{|y_w|}\log\frac{\pi_\theta(y_w|x)}{\pi_{\text{ref}}(y_w|x)}-\frac{\beta}{|y_l|}\log\frac{\pi_\theta(y_l|x)}{\pi_{\text{ref}}(y_l|x)}\right))

Different models' hyperparameters are shown in the table below.

<table class="js-sort-table table hidden">
<tr>
    <td class="js-sort-string"><strong>Target SFT Models</strong></td>
    <td class="js-sort-string"><strong>Learning rate</strong></td>
    <td class="js-sort-string"><strong>β</strong></td>
    <td class="js-sort-string"><strong>Length normalize</strong></td>
</tr>
<tr>
    <td>FuseChat-Llama-3.1-8B-SFT</td>
    <td>8e-7</td>
    <td>10</td>
    <td>Yes</td>
</tr>
<tr>
    <td>FuseChat-Qwen-2.5-7B-SFT</td>
    <td>3e-7</td>
    <td>0.01</td>
    <td>No</td>
</tr>
<tr>
    <td>FuseChat-Gemma-2-9B-SFT</td>
    <td>5e-7</td>
    <td>0.01</td>
    <td>No</td>
</tr>
<tr>
    <td>FuseChat-Llama-3.2-(1/3)B-SFT</td>
    <td>1e-6</td>
    <td>10</td>
    <td>Yes</td>
</tr>
</table>

## Evaluation

The evaluation of instruction-tuned models mainly focuses on model performance in instruction following, natural language understanding, general question answering, reasoning, mathematics, coding, etc. For the evaluation of FuseChat-3.0, we include 14 benchmarks and organize them into four categories:

- **Instruction Following** Tasks: AlpacaEval-2, Arena-Hard, MT-Bench, AlignBench v1.1 (Chinese).
- **General** Tasks: LiveBench-0831, MMLU-Pro, MMLU-redux, GPQA-Diamond.
- **Mathematics** Tasks: GSM8K, MATH, AMC 23.
- **Coding** Tasks: HumanEval, MBPP, LiveCodeBench 2408-2411.

We include more details and release our evaluation code at [FuseEval](https://github.com/SLIT-AI/FuseChat-3.0/FuseEval).

The evaluation results of the five series of fused models are as follows, showing that our FuseChat-3.0 models achieved varying degrees of improvement across different target models. When selecting Llama-3.1-8B-Instruct as the target model, our fusion model **FuseChat-Llama-3.1-8B-Instruct achieved an average performance improvement of 6.8 points across 14 benchmarks. Notably, it showed significant improvements of 37.1 and 30.1 points on the instruction-following test sets AlpacaEval-2 and Arena-Hard respectively**. Additionally, FuseChat-Llama-3.1-8B-Instruct outperformed AllenAI's recently released Llama-3.1-Tulu-3-8B model on all benchmarks except GSM8K and GPQA-Diamond. All these results demonstrate the effectiveness and success of FuseChat-3.0.
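For readers who want to map the length-normalized DPO objective above onto code, a minimal PyTorch-style sketch is shown below. This is an illustration only, not the released training code (which is built on alignment-handbook); the function name, tensor names, and the assumption that per-response log-probabilities have already been summed over tokens are ours.

```python
import torch
import torch.nn.functional as F

def ln_dpo_loss(policy_chosen_logps: torch.Tensor,
                policy_rejected_logps: torch.Tensor,
                ref_chosen_logps: torch.Tensor,
                ref_rejected_logps: torch.Tensor,
                chosen_lengths: torch.Tensor,
                rejected_lengths: torch.Tensor,
                beta: float = 10.0) -> torch.Tensor:
    # Each *_logps tensor holds a response's token log-probabilities summed over
    # the response, under the policy or the frozen reference model.
    # Dividing by the response length gives the beta/|y| scaling in the formula.
    chosen_term = beta * (policy_chosen_logps - ref_chosen_logps) / chosen_lengths
    rejected_term = beta * (policy_rejected_logps - ref_rejected_logps) / rejected_lengths
    return -F.logsigmoid(chosen_term - rejected_term).mean()
```

Dropping the division by the response lengths recovers the standard DPO loss, which corresponds to the rows in the hyperparameter table above where length normalization is disabled.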
### FuseChat-Llama-3.2-1B-Instruct Performance <table class="js-sort-table table hidden"> <tr> <td class="js-sort-string"><strong>Benchmarks</strong></td> <td class="js-sort-string"><strong>Llama-3.2-1B-Instruct</strong></td> <td class="js-sort-string"><strong>FuseChat-Llama-3.2-1B-SFT</strong></td> <td class="js-sort-string"><strong>FuseChat-Llama-3.2-1B-Instruct</strong></td> </tr> <tr> <td style="white-space: nowrap;">AlpacaEval-2 (LC %)</td> <td>9.7</td> <td>14.0</td> <td><strong>25.3</strong></td> </tr> <tr> <td>Arena-Hard (WR %)</td> <td>5.1</td> <td>6.0</td> <td><strong>8.6</strong></td> </tr> <tr> <td>MT-Bench</td> <td>4.7</td> <td>5.2</td> <td><strong>5.7</strong></td> </tr> <tr> <td>AlignBench v1.1</td> <td>2.9</td> <td>3.9</td> <td><strong>4.3</strong></td> </tr> <tr> <td>GSM8K</td> <td>46.3</td> <td><strong>55.6</strong></td> <td>54.5</td> </tr> <tr> <td>MATH</td> <td>32.7</td> <td><strong>34.7</strong></td> <td>33.6</td> </tr> <tr> <td>AMC23</td> <td>17.5</td> <td>15.0</td> <td><strong>20.0</strong></td> </tr> <tr> <td>LiveBench 0831</td> <td>14.0</td> <td>13.9</td> <td><strong>15.8</strong></td> </tr> <tr> <td>MMLU-Pro</td> <td><strong>22.3</strong></td> <td>21.5</td> <td>21.3</td> </tr> <tr> <td>MMLU</td> <td><strong>45.8</strong></td> <td>45.0</td> <td>44.8</td> </tr> <tr> <td>GPQA-Diamond</td> <td>21.2</td> <td><strong>25.3</strong></td> <td>24.2</td> </tr> <tr> <td>HumanEval</td> <td>39.6</td> <td>36.6</td> <td><strong>40.2</strong></td> </tr> <tr> <td>MBPP</td> <td><strong>49.5</strong></td> <td>42.1</td> <td>46.6</td> </tr> <tr> <td>Average</td> <td>24.0</td> <td>24.5</td> <td><strong>26.5</strong></td> </tr> </table> ## Citation ``` @inproceedings{yang2025weightedreward, title={Weighted-Reward Preference Optimization for Implicit Model Fusion}, author={Ziyi Yang and Fanqi Wan and Longguang Zhong and Tianyuan Shi and Xiaojun Quan}, booktitle={The Thirteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=fq24pEb8SL} } ```
{"datasets": ["FuseAI/FuseChat-3.0-DPO-Data"]}
task
[ "QUESTION_ANSWERING" ]
46,072
PauloAguirra/summarize
PauloAguirra
summarization
[ "summarization", "en", "pt", "dataset:neuralwork/arxiver", "base_model:openai/whisper-large-v3-turbo", "base_model:finetune:openai/whisper-large-v3-turbo", "license:mit", "region:us" ]
2024-11-09T14:50:31Z
2024-11-09T14:52:31+00:00
0
0
--- base_model: - openai/whisper-large-v3-turbo datasets: - neuralwork/arxiver language: - en - pt license: mit pipeline_tag: summarization ---
null
Non_BioNLP
{"base_model": ["openai/whisper-large-v3-turbo"], "datasets": ["neuralwork/arxiver"], "language": ["en", "pt"], "license": "mit", "pipeline_tag": "summarization"}
task
[ "SUMMARIZATION" ]
46,073
Helsinki-NLP/opus-mt-is-en
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "is", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-08-16T11:58:30+00:00
3,190
1
--- license: apache-2.0 tags: - translation --- ### opus-mt-is-en * source languages: is * target languages: en * OPUS readme: [is-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/is-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/is-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/is-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/is-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.is.en | 51.4 | 0.672 |
null
Non_BioNLP
### opus-mt-is-en * source languages: is * target languages: en * OPUS readme: [is-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/is-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/is-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/is-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/is-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.is.en | 51.4 | 0.672 |
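The card above does not include a usage snippet; a minimal inference sketch with the transformers MarianMT classes is shown below. The example sentence and decoding settings are illustrative and are not part of the original card.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-is-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Illustrative Icelandic input (not from the card); the expected output is
# roughly "Where is the library?"
batch = tokenizer(["Hvar er bókasafnið?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```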
{"license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
46,074
RUCAIBox/mvp-open-dialog
RUCAIBox
text2text-generation
[ "transformers", "pytorch", "mvp", "text-generation", "text2text-generation", "conversational", "en", "arxiv:2206.12131", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2022-06-02T11:53:44Z
2022-06-27T02:28:00+00:00
91
1
--- language: - en license: apache-2.0 pipeline_tag: text2text-generation tags: - text-generation - text2text-generation - conversational widget: - text: 'Given the dialog: do you like dance? [SEP] Yes I do. Did you know Bruce Lee was a cha cha dancer?' example_title: Example1 - text: 'Given the dialog: i used to scare for darkness [X_SEP] it feels like hitting to blank wall when i see the darkness [SEP] Oh ya? I don''t really see how [SEP] dont you feel so.. its a wonder [SEP] I do actually hit blank walls a lot of times but i get by' example_title: Example2 --- # MVP-open-dialog The MVP-open-dialog model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP). ## Model Description MVP-open-dialog is a prompt-based model that MVP is further equipped with prompts pre-trained using labeled open dialogue system datasets. It is a variant (MVP+S) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a Transformer encoder-decoder architecture with layer-wise prompts. MVP-open-dialog is specially designed for open dialogue system (conversation) tasks, such as chitchat (PersonaChat, DailyDialog), knowledge grounded conversation (Topical-Chat, Wizard of Wikipedia) and visual dialog (DSTC7-AVSD). ## Example ```python >>> from transformers import MvpTokenizer, MvpForConditionalGeneration >>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp") >>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-open-dialog") >>> inputs = tokenizer( ... "Given the dialog: do you like dance? [SEP] Yes I do. Did you know Bruce Lee was a cha cha dancer?", ... return_tensors="pt", ... ) >>> generated_ids = model.generate(**inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ['I did not know that. I did know that Tupac danced ballet in high school.'] ``` ## Related Models **MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp). **Prompt-based models**: - MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task). - MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization). - MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog). - MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text). - MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story). - MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering). - MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation). - MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog). **Multi-task models**: - MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization). - MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog). 
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text). - MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story). - MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering). - MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation). - MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog). ## Citation ```bibtex @article{tang2022mvp, title={MVP: Multi-task Supervised Pre-training for Natural Language Generation}, author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong}, journal={arXiv preprint arXiv:2206.12131}, year={2022}, url={https://arxiv.org/abs/2206.12131}, } ```
null
Non_BioNLP
# MVP-open-dialog The MVP-open-dialog model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP). ## Model Description MVP-open-dialog is a prompt-based model that MVP is further equipped with prompts pre-trained using labeled open dialogue system datasets. It is a variant (MVP+S) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a Transformer encoder-decoder architecture with layer-wise prompts. MVP-open-dialog is specially designed for open dialogue system (conversation) tasks, such as chitchat (PersonaChat, DailyDialog), knowledge grounded conversation (Topical-Chat, Wizard of Wikipedia) and visual dialog (DSTC7-AVSD). ## Example ```python >>> from transformers import MvpTokenizer, MvpForConditionalGeneration >>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp") >>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-open-dialog") >>> inputs = tokenizer( ... "Given the dialog: do you like dance? [SEP] Yes I do. Did you know Bruce Lee was a cha cha dancer?", ... return_tensors="pt", ... ) >>> generated_ids = model.generate(**inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ['I did not know that. I did know that Tupac danced ballet in high school.'] ``` ## Related Models **MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp). **Prompt-based models**: - MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task). - MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization). - MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog). - MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text). - MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story). - MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering). - MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation). - MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog). **Multi-task models**: - MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization). - MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog). - MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text). - MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story). - MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering). - MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation). - MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog). 
## Citation ```bibtex @article{tang2022mvp, title={MVP: Multi-task Supervised Pre-training for Natural Language Generation}, author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong}, journal={arXiv preprint arXiv:2206.12131}, year={2022}, url={https://arxiv.org/abs/2206.12131}, } ```
{"language": ["en"], "license": "apache-2.0", "pipeline_tag": "text2text-generation", "tags": ["text-generation", "text2text-generation", "conversational"], "widget": [{"text": "Given the dialog: do you like dance? [SEP] Yes I do. Did you know Bruce Lee was a cha cha dancer?", "example_title": "Example1"}, {"text": "Given the dialog: i used to scare for darkness [X_SEP] it feels like hitting to blank wall when i see the darkness [SEP] Oh ya? I don't really see how [SEP] dont you feel so.. its a wonder [SEP] I do actually hit blank walls a lot of times but i get by", "example_title": "Example2"}]}
task
[ "SUMMARIZATION" ]
46,075
neuralmagic/Qwen2-VL-72B-Instruct-FP8-dynamic
neuralmagic
image-text-to-text
[ "transformers", "safetensors", "qwen2_vl", "image-text-to-text", "vllm", "vision", "w4a16", "conversational", "en", "base_model:Qwen/Qwen2-VL-72B-Instruct", "base_model:quantized:Qwen/Qwen2-VL-72B-Instruct", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "compressed-tensors", "region:us" ]
2024-11-14T15:38:47Z
2025-03-31T23:46:45+00:00
131
0
--- base_model: Qwen/Qwen2-VL-72B-Instruct language: - en library_name: transformers license: apache-2.0 license_link: https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md tags: - vllm - vision - w4a16 --- # Qwen2-VL-72B-Instruct-quantized-w4a16 ## Model Overview - **Model Architecture:** Qwen/Qwen2-VL-72B-Instruct - **Input:** Vision-Text - **Output:** Text - **Model Optimizations:** - **Weight quantization:** FP8 - **Activation quantization:** FP8 - **Release Date:** 2/24/2025 - **Version:** 1.0 - **Model Developers:** Neural Magic Quantized version of [Qwen/Qwen2-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct). ### Model Optimizations This model was obtained by quantizing the weights of [Qwen/Qwen2-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct) to FP8 data type, ready for inference with vLLM >= 0.5.2. ## Deployment ### Use with vLLM This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below. ```python from vllm.assets.image import ImageAsset from vllm import LLM, SamplingParams # prepare model llm = LLM( model="neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16", trust_remote_code=True, max_model_len=4096, max_num_seqs=2, ) # prepare inputs question = "What is the content of this image?" inputs = { "prompt": f"<|user|>\n<|image_1|>\n{question}<|end|>\n<|assistant|>\n", "multi_modal_data": { "image": ImageAsset("cherry_blossom").pil_image.convert("RGB") }, } # generate response print("========== SAMPLE GENERATION ==============") outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64)) print(f"PROMPT : {outputs[0].prompt}") print(f"RESPONSE: {outputs[0].outputs[0].text}") print("==========================================") ``` vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details. ## Creation This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below as part a multimodal announcement blog. <details> <summary>Model Creation Code</summary> ```python from transformers import AutoProcessor, Qwen2VLForConditionalGeneration from llmcompressor.modifiers.quantization import QuantizationModifier from llmcompressor.transformers import oneshot, wrap_hf_model_class MODEL_ID = "Qwen/Qwen2-VL-72B-Instruct" # Load model. model_class = wrap_hf_model_class(Qwen2VLForConditionalGeneration) model = model_class.from_pretrained(MODEL_ID, device_map="auto", torch_dtype="auto") processor = AutoProcessor.from_pretrained(MODEL_ID) # Configure the quantization algorithm and scheme. # In this case, we: # * quantize the weights to fp8 with per channel via ptq # * quantize the activations to fp8 with dynamic per token recipe = QuantizationModifier( targets="Linear", scheme="FP8_DYNAMIC", ignore=["re:.*lm_head", "re:visual.*"], ) # Apply quantization and save to disk in compressed-tensors format. SAVE_DIR = MODEL_ID.split("/")[1] + "-FP8-dynamic" oneshot(model=model, recipe=recipe, output_dir=SAVE_DIR) processor.save_pretrained(SAVE_DIR) # Confirm generations of the quantized model look sane. 
print("========== SAMPLE GENERATION ==============") input_ids = processor(text="Hello my name is", return_tensors="pt").input_ids.to("cuda") output = model.generate(input_ids, max_new_tokens=20) print(processor.decode(output[0])) print("==========================================") ``` </details> ## Evaluation The model was evaluated using [mistral-evals](https://github.com/neuralmagic/mistral-evals) for vision-related tasks and using [lm_evaluation_harness](https://github.com/neuralmagic/lm-evaluation-harness) for select text-based benchmarks. The evaluations were conducted using the following commands: <details> <summary>Evaluation Commands</summary> ### Vision Tasks - vqav2 - docvqa - mathvista - mmmu - chartqa ``` vllm serve neuralmagic/pixtral-12b-quantized.w8a8 --tensor_parallel_size 1 --max_model_len 25000 --trust_remote_code --max_num_seqs 8 --gpu_memory_utilization 0.9 --dtype float16 --limit_mm_per_prompt image=7 python -m eval.run eval_vllm \ --model_name neuralmagic/pixtral-12b-quantized.w8a8 \ --url http://0.0.0.0:8000 \ --output_dir ~/tmp \ --eval_name <vision_task_name> ``` ### Text-based Tasks #### MMLU ``` lm_eval \ --model vllm \ --model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \ --tasks mmlu \ --num_fewshot 5 \ --batch_size auto \ --output_path output_dir ``` #### MGSM ``` lm_eval \ --model vllm \ --model_args pretrained="<model_name>",dtype=auto,max_model_len=4096,max_gen_toks=2048,max_num_seqs=128,tensor_parallel_size=<n>,gpu_memory_utilization=0.9 \ --tasks mgsm_cot_native \ --num_fewshot 0 \ --batch_size auto \ --output_path output_dir ``` </details> ### Accuracy <table> <thead> <tr> <th>Category</th> <th>Metric</th> <th>Qwen/Qwen2-VL-72B-Instruct</th> <th>neuralmagic/Qwen2-VL-72B-Instruct-FP8-Dynamic</th> <th>Recovery (%)</th> </tr> </thead> <tbody> <tr> <td rowspan="6"><b>Vision</b></td> <td>MMMU (val, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td> <td>62.11</td> <td>60.67</td> <td>97.68%</td> </tr> <tr> <td>VQAv2 (val)<br><i>vqa_match</i></td> <td>82.51</td> <td>82.44</td> <td>99.91%</td> </tr> <tr> <td>DocVQA (val)<br><i>anls</i></td> <td>95.01</td> <td>95.10</td> <td>100.09%</td> </tr> <tr> <td>ChartQA (test, CoT)<br><i>anywhere_in_answer_relaxed_correctness</i></td> <td>83.40</td> <td>83.68</td> <td>100.34%</td> </tr> <tr> <td>Mathvista (testmini, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td> <td>66.57</td> <td>67.07</td> <td>100.75%</td> </tr> <tr> <td><b>Average Score</b></td> <td><b>77.12</b></td> <td><b>77.39</b></td> <td><b>100.35%</b></td> </tr> <tr> <td rowspan="2"><b>Text</b></td> <td>MGSM (CoT)</td> <td>68.60</td> <td>67.78</td> <td>98.80%</td> </tr> <tr> <td>MMLU (5-shot)</td> <td>82.70</td> <td>82.60</td> <td>99.88%</td> </tr> </tbody> </table> ## Inference Performance This model achieves up to 1.84x speedup in single-stream deployment and up to 1.85x speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario. The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.7.2, and [GuideLLM](https://github.com/neuralmagic/guidellm). 
<details> <summary>Benchmarking Command</summary> ``` guidellm --model neuralmagic/Qwen2-VL-72B-Instruct-FP8-Dynamic --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>,images=<num_images>,width=<image_width>,height=<image_height> --max seconds 120 --backend aiohttp_server ``` </details> ### Single-stream performance (measured with vLLM version 0.7.2) <table border="1" class="dataframe"> <thead> <tr> <th></th> <th></th> <th></th> <th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th> <th style="text-align: center;" colspan="2" >Visual Reasoning <br>640W x 480H<br>128/128</th> <th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th> </tr> <tr> <th>Hardware</th> <th>Number of GPUs</th> <th>Model</th> <th>Average Cost Reduction</th> <th>Latency (s)</th> <th>QPD</th> <th>Latency (s)th> <th>QPD</th> <th>Latency (s)</th> <th>QPD</th> </tr> </thead> <tbody> <tr> <th rowspan="3" valign="top">A100</th> <td>4</td> <td>Qwen/Qwen2-VL-72B-Instruct</td> <td></td> <td>6.5</td> <td>77</td> <td>4.6</td> <td>110</td> <td>4.4</td> <td>113</td> </tr> <tr> <td>2</td> <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8</td> <td>1.85</td> <td>7.2</td> <td>139</td> <td>4.9</td> <td>206</td> <td>4.8</td> <td>211</td> </tr> <tr> <td>1</td> <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td> <td>3.32</td> <td>10.0</td> <td>202</td> <td>5.0</td> <td>398</td> <td>4.8</td> <td>419</td> </tr> <tr> <th rowspan="3" valign="top">H100</td> <td>4</td> <td>Qwen/Qwen2-VL-72B-Instruct</td> <td></td> <td>4.4</td> <td>66</td> <td>3.0</td> <td>97</td> <td>2.9</td> <td>99</td> </tr> <tr> <td>2</td> <td>neuralmagic/Qwen2-VL-72B-Instruct-FP8-Dynamic</td> <td>1.79</td> <td>4.7</td> <td>119</td> <td>3.3</td> <td>173</td> <td>3.2</td> <td>177</td> </tr> <tr> <td>1</td> <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td> <td>2.60</td> <td>6.4</td> <td>172</td> <td>4.3</td> <td>253</td> <td>4.2</td> <td>259</td> </tr> </tbody> </table> **Use case profiles: Image Size (WxH) / prompt tokens / generation tokens **QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025). 
### Multi-stream asynchronous performance (measured with vLLM version 0.7.2) <table border="1" class="dataframe"> <thead> <tr> <th></th> <th></th> <th></th> <th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th> <th style="text-align: center;" colspan="2" >Visual Reasoning <br>640W x 480H<br>128/128</th> <th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th> </tr> <tr> <th>Hardware</th> <th>Model</th> <th>Average Cost Reduction</th> <th>Maximum throughput (QPS)</th> <th>QPD</th> <th>Maximum throughput (QPS)</th> <th>QPD</th> <th>Maximum throughput (QPS)</th> <th>QPD</th> </tr> </thead> <tbody> <tr> <th rowspan="3" valign="top">A100x4</th> <td>Qwen/Qwen2-VL-72B-Instruct</td> <td></td> <td>0.3</td> <td>169</td> <td>1.1</td> <td>538</td> <td>1.2</td> <td>595</td> </tr> <tr> <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8</td> <td>1.84</td> <td>0.6</td> <td>293</td> <td>2.0</td> <td>1021</td> <td>2.3</td> <td>1135</td> </tr> <tr> <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td> <td>2.73</td> <td>0.6</td> <td>314</td> <td>3.2</td> <td>1591</td> <td>4.0</td> <td>2019</td> </tr> <tr> <th rowspan="3" valign="top">H100x4</td> <td>Qwen/Qwen2-VL-72B-Instruct</td> <td></td> <td>0.5</td> <td>137</td> <td>1.2</td> <td>356</td> <td>1.3</td> <td>377</td> </tr> <tr> <td>neuralmagic/Qwen2-VL-72B-Instruct-FP8-Dynamic</td> <td>1.70</td> <td>0.8</td> <td>236</td> <td>2.2</td> <td>623</td> <td>2.4</td> <td>669</td> </tr> <tr> <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td> <td>2.35</td> <td>1.3</td> <td>350</td> <td>3.3</td> <td>910</td> <td>3.6</td> <td>994</td> </tr> </tbody> </table> **Use case profiles: Image Size (WxH) / prompt tokens / generation tokens **QPS: Queries per second. **QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025).
null
Non_BioNLP
# Qwen2-VL-72B-Instruct-quantized-w4a16 ## Model Overview - **Model Architecture:** Qwen/Qwen2-VL-72B-Instruct - **Input:** Vision-Text - **Output:** Text - **Model Optimizations:** - **Weight quantization:** FP8 - **Activation quantization:** FP8 - **Release Date:** 2/24/2025 - **Version:** 1.0 - **Model Developers:** Neural Magic Quantized version of [Qwen/Qwen2-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct). ### Model Optimizations This model was obtained by quantizing the weights of [Qwen/Qwen2-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct) to FP8 data type, ready for inference with vLLM >= 0.5.2. ## Deployment ### Use with vLLM This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below. ```python from vllm.assets.image import ImageAsset from vllm import LLM, SamplingParams # prepare model llm = LLM( model="neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16", trust_remote_code=True, max_model_len=4096, max_num_seqs=2, ) # prepare inputs question = "What is the content of this image?" inputs = { "prompt": f"<|user|>\n<|image_1|>\n{question}<|end|>\n<|assistant|>\n", "multi_modal_data": { "image": ImageAsset("cherry_blossom").pil_image.convert("RGB") }, } # generate response print("========== SAMPLE GENERATION ==============") outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64)) print(f"PROMPT : {outputs[0].prompt}") print(f"RESPONSE: {outputs[0].outputs[0].text}") print("==========================================") ``` vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details. ## Creation This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below as part a multimodal announcement blog. <details> <summary>Model Creation Code</summary> ```python from transformers import AutoProcessor, Qwen2VLForConditionalGeneration from llmcompressor.modifiers.quantization import QuantizationModifier from llmcompressor.transformers import oneshot, wrap_hf_model_class MODEL_ID = "Qwen/Qwen2-VL-72B-Instruct" # Load model. model_class = wrap_hf_model_class(Qwen2VLForConditionalGeneration) model = model_class.from_pretrained(MODEL_ID, device_map="auto", torch_dtype="auto") processor = AutoProcessor.from_pretrained(MODEL_ID) # Configure the quantization algorithm and scheme. # In this case, we: # * quantize the weights to fp8 with per channel via ptq # * quantize the activations to fp8 with dynamic per token recipe = QuantizationModifier( targets="Linear", scheme="FP8_DYNAMIC", ignore=["re:.*lm_head", "re:visual.*"], ) # Apply quantization and save to disk in compressed-tensors format. SAVE_DIR = MODEL_ID.split("/")[1] + "-FP8-dynamic" oneshot(model=model, recipe=recipe, output_dir=SAVE_DIR) processor.save_pretrained(SAVE_DIR) # Confirm generations of the quantized model look sane. print("========== SAMPLE GENERATION ==============") input_ids = processor(text="Hello my name is", return_tensors="pt").input_ids.to("cuda") output = model.generate(input_ids, max_new_tokens=20) print(processor.decode(output[0])) print("==========================================") ``` </details> ## Evaluation The model was evaluated using [mistral-evals](https://github.com/neuralmagic/mistral-evals) for vision-related tasks and using [lm_evaluation_harness](https://github.com/neuralmagic/lm-evaluation-harness) for select text-based benchmarks. 
The evaluations were conducted using the following commands: <details> <summary>Evaluation Commands</summary> ### Vision Tasks - vqav2 - docvqa - mathvista - mmmu - chartqa ``` vllm serve neuralmagic/pixtral-12b-quantized.w8a8 --tensor_parallel_size 1 --max_model_len 25000 --trust_remote_code --max_num_seqs 8 --gpu_memory_utilization 0.9 --dtype float16 --limit_mm_per_prompt image=7 python -m eval.run eval_vllm \ --model_name neuralmagic/pixtral-12b-quantized.w8a8 \ --url http://0.0.0.0:8000 \ --output_dir ~/tmp \ --eval_name <vision_task_name> ``` ### Text-based Tasks #### MMLU ``` lm_eval \ --model vllm \ --model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \ --tasks mmlu \ --num_fewshot 5 \ --batch_size auto \ --output_path output_dir ``` #### MGSM ``` lm_eval \ --model vllm \ --model_args pretrained="<model_name>",dtype=auto,max_model_len=4096,max_gen_toks=2048,max_num_seqs=128,tensor_parallel_size=<n>,gpu_memory_utilization=0.9 \ --tasks mgsm_cot_native \ --apply_chat_template \ --num_fewshot 0 \ --batch_size auto \ --output_path output_dir ``` </details> ### Accuracy <table> <thead> <tr> <th>Category</th> <th>Metric</th> <th>Qwen/Qwen2-VL-72B-Instruct</th> <th>neuralmagic/Qwen2-VL-72B-Instruct-FP8-Dynamic</th> <th>Recovery (%)</th> </tr> </thead> <tbody> <tr> <td rowspan="6"><b>Vision</b></td> <td>MMMU (val, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td> <td>62.11</td> <td>60.67</td> <td>97.68%</td> </tr> <tr> <td>VQAv2 (val)<br><i>vqa_match</i></td> <td>82.51</td> <td>82.44</td> <td>99.91%</td> </tr> <tr> <td>DocVQA (val)<br><i>anls</i></td> <td>95.01</td> <td>95.10</td> <td>100.09%</td> </tr> <tr> <td>ChartQA (test, CoT)<br><i>anywhere_in_answer_relaxed_correctness</i></td> <td>83.40</td> <td>83.68</td> <td>100.34%</td> </tr> <tr> <td>Mathvista (testmini, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td> <td>66.57</td> <td>67.07</td> <td>100.75%</td> </tr> <tr> <td><b>Average Score</b></td> <td><b>77.12</b></td> <td><b>77.39</b></td> <td><b>100.35%</b></td> </tr> <tr> <td rowspan="2"><b>Text</b></td> <td>MGSM (CoT)</td> <td>68.60</td> <td>67.78</td> <td>98.80%</td> </tr> <tr> <td>MMLU (5-shot)</td> <td>82.70</td> <td>82.60</td> <td>99.88%</td> </tr> </tbody> </table> ## Inference Performance This model achieves up to 1.84x speedup in single-stream deployment and up to 1.85x speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario. The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.7.2, and [GuideLLM](https://github.com/neuralmagic/guidellm). 
<details> <summary>Benchmarking Command</summary> ``` guidellm --model neuralmagic/Qwen2-VL-72B-Instruct-FP8-Dynamic --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>,images=<num_images>,width=<image_width>,height=<image_height> --max seconds 120 --backend aiohttp_server ``` </details> ### Single-stream performance (measured with vLLM version 0.7.2) <table border="1" class="dataframe"> <thead> <tr> <th></th> <th></th> <th></th> <th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th> <th style="text-align: center;" colspan="2" >Visual Reasoning <br>640W x 480H<br>128/128</th> <th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th> </tr> <tr> <th>Hardware</th> <th>Number of GPUs</th> <th>Model</th> <th>Average Cost Reduction</th> <th>Latency (s)</th> <th>QPD</th> <th>Latency (s)th> <th>QPD</th> <th>Latency (s)</th> <th>QPD</th> </tr> </thead> <tbody> <tr> <th rowspan="3" valign="top">A100</th> <td>4</td> <td>Qwen/Qwen2-VL-72B-Instruct</td> <td></td> <td>6.5</td> <td>77</td> <td>4.6</td> <td>110</td> <td>4.4</td> <td>113</td> </tr> <tr> <td>2</td> <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8</td> <td>1.85</td> <td>7.2</td> <td>139</td> <td>4.9</td> <td>206</td> <td>4.8</td> <td>211</td> </tr> <tr> <td>1</td> <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td> <td>3.32</td> <td>10.0</td> <td>202</td> <td>5.0</td> <td>398</td> <td>4.8</td> <td>419</td> </tr> <tr> <th rowspan="3" valign="top">H100</td> <td>4</td> <td>Qwen/Qwen2-VL-72B-Instruct</td> <td></td> <td>4.4</td> <td>66</td> <td>3.0</td> <td>97</td> <td>2.9</td> <td>99</td> </tr> <tr> <td>2</td> <td>neuralmagic/Qwen2-VL-72B-Instruct-FP8-Dynamic</td> <td>1.79</td> <td>4.7</td> <td>119</td> <td>3.3</td> <td>173</td> <td>3.2</td> <td>177</td> </tr> <tr> <td>1</td> <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td> <td>2.60</td> <td>6.4</td> <td>172</td> <td>4.3</td> <td>253</td> <td>4.2</td> <td>259</td> </tr> </tbody> </table> **Use case profiles: Image Size (WxH) / prompt tokens / generation tokens **QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025). 
### Multi-stream asynchronous performance (measured with vLLM version 0.7.2) <table border="1" class="dataframe"> <thead> <tr> <th></th> <th></th> <th></th> <th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th> <th style="text-align: center;" colspan="2" >Visual Reasoning <br>640W x 480H<br>128/128</th> <th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th> </tr> <tr> <th>Hardware</th> <th>Model</th> <th>Average Cost Reduction</th> <th>Maximum throughput (QPS)</th> <th>QPD</th> <th>Maximum throughput (QPS)</th> <th>QPD</th> <th>Maximum throughput (QPS)</th> <th>QPD</th> </tr> </thead> <tbody> <tr> <th rowspan="3" valign="top">A100x4</th> <td>Qwen/Qwen2-VL-72B-Instruct</td> <td></td> <td>0.3</td> <td>169</td> <td>1.1</td> <td>538</td> <td>1.2</td> <td>595</td> </tr> <tr> <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8</td> <td>1.84</td> <td>0.6</td> <td>293</td> <td>2.0</td> <td>1021</td> <td>2.3</td> <td>1135</td> </tr> <tr> <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td> <td>2.73</td> <td>0.6</td> <td>314</td> <td>3.2</td> <td>1591</td> <td>4.0</td> <td>2019</td> </tr> <tr> <th rowspan="3" valign="top">H100x4</td> <td>Qwen/Qwen2-VL-72B-Instruct</td> <td></td> <td>0.5</td> <td>137</td> <td>1.2</td> <td>356</td> <td>1.3</td> <td>377</td> </tr> <tr> <td>neuralmagic/Qwen2-VL-72B-Instruct-FP8-Dynamic</td> <td>1.70</td> <td>0.8</td> <td>236</td> <td>2.2</td> <td>623</td> <td>2.4</td> <td>669</td> </tr> <tr> <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td> <td>2.35</td> <td>1.3</td> <td>350</td> <td>3.3</td> <td>910</td> <td>3.6</td> <td>994</td> </tr> </tbody> </table> **Use case profiles: Image Size (WxH) / prompt tokens / generation tokens **QPS: Queries per second. **QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025).
{"base_model": "Qwen/Qwen2-VL-72B-Instruct", "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "license_link": "https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md", "tags": ["vllm", "vision", "w4a16"]}
task
[ "QUESTION_ANSWERING" ]
46,076
DarkTronics/marian-finetuned-kde4-en-to-fr
DarkTronics
translation
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-11-23T21:53:18Z
2024-11-26T04:03:31+00:00
5
0
--- base_model: Helsinki-NLP/opus-mt-en-fr library_name: transformers license: apache-2.0 tags: - translation - generated_from_trainer model-index: - name: marian-finetuned-kde4-en-to-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.45.2 - Pytorch 2.5.1+cu121 - Tokenizers 0.20.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.45.2 - Pytorch 2.5.1+cu121 - Tokenizers 0.20.3
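Since the card lists no usage example, a minimal inference sketch is included below. It assumes the fine-tuned checkpoint is available on the Hub under the repo id shown for this card; the input sentence is illustrative.

```python
from transformers import pipeline

# Assumes the checkpoint is hosted at this repo id (taken from the card).
translator = pipeline("translation", model="DarkTronics/marian-finetuned-kde4-en-to-fr")

# Illustrative KDE-style UI string; the output should be its French translation.
print(translator("Default to expanded threads"))
```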
{"base_model": "Helsinki-NLP/opus-mt-en-fr", "library_name": "transformers", "license": "apache-2.0", "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-en-to-fr", "results": []}]}
task
[ "TRANSLATION" ]
46,077
celine98/canine-s-finetuned-sst2
celine98
text-classification
[ "transformers", "pytorch", "tensorboard", "canine", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-21T22:35:16Z
2022-03-22T09:47:45+00:00
99
2
--- datasets: - glue license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: canine-s-finetuned-sst2 results: - task: type: text-classification name: Text Classification dataset: name: glue type: glue args: sst2 metrics: - type: accuracy value: 0.8577981651376146 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # canine-s-finetuned-sst2 This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5259 - Accuracy: 0.8578 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3524 | 1.0 | 4210 | 0.4762 | 0.8257 | | 0.2398 | 2.0 | 8420 | 0.4169 | 0.8567 | | 0.1797 | 3.0 | 12630 | 0.5259 | 0.8578 | | 0.152 | 4.0 | 16840 | 0.5996 | 0.8532 | | 0.1026 | 5.0 | 21050 | 0.6676 | 0.8578 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # canine-s-finetuned-sst2 This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5259 - Accuracy: 0.8578 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3524 | 1.0 | 4210 | 0.4762 | 0.8257 | | 0.2398 | 2.0 | 8420 | 0.4169 | 0.8567 | | 0.1797 | 3.0 | 12630 | 0.5259 | 0.8578 | | 0.152 | 4.0 | 16840 | 0.5996 | 0.8532 | | 0.1026 | 5.0 | 21050 | 0.6676 | 0.8578 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
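For reference, a minimal inference sketch for this checkpoint is shown below. The input sentence is illustrative, and the printed label depends on how the classification head's id2label mapping was configured, which the card does not specify.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "celine98/canine-s-finetuned-sst2"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("a gorgeous, witty, seductive movie", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_id = int(logits.argmax(dim=-1))
# Falls back to the raw class index if no custom label names were saved.
print(model.config.id2label.get(pred_id, pred_id))
```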
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "canine-s-finetuned-sst2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "sst2"}, "metrics": [{"type": "accuracy", "value": 0.8577981651376146, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,078
Realgon/left_padding30model
Realgon
text-classification
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-11-07T14:06:44Z
2023-11-27T07:14:14+00:00
6
0
--- base_model: distilbert-base-uncased datasets: - imdb license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: left_padding30model results: - task: type: text-classification name: Text Classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - type: accuracy value: 0.93128 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # left_padding30model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Accuracy: 0.9313 - Loss: 0.6821 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:-----:|:--------:|:---------------:| | 0.0907 | 1.0 | 1563 | 0.9292 | 0.3495 | | 0.0396 | 2.0 | 3126 | 0.9298 | 0.3753 | | 0.0348 | 3.0 | 4689 | 0.9192 | 0.3895 | | 0.0484 | 4.0 | 6252 | 0.9294 | 0.4647 | | 0.0175 | 5.0 | 7815 | 0.9284 | 0.5595 | | 0.0202 | 6.0 | 9378 | 0.9299 | 0.5709 | | 0.0035 | 7.0 | 10941 | 0.9287 | 0.6317 | | 0.0 | 8.0 | 12504 | 0.9305 | 0.7006 | | 0.0045 | 9.0 | 14067 | 0.9310 | 0.6876 | | 0.0 | 10.0 | 15630 | 0.9313 | 0.6821 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.0.0+cu117 - Datasets 2.14.6 - Tokenizers 0.14.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # left_padding30model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Accuracy: 0.9313 - Loss: 0.6821 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:-----:|:--------:|:---------------:| | 0.0907 | 1.0 | 1563 | 0.9292 | 0.3495 | | 0.0396 | 2.0 | 3126 | 0.9298 | 0.3753 | | 0.0348 | 3.0 | 4689 | 0.9192 | 0.3895 | | 0.0484 | 4.0 | 6252 | 0.9294 | 0.4647 | | 0.0175 | 5.0 | 7815 | 0.9284 | 0.5595 | | 0.0202 | 6.0 | 9378 | 0.9299 | 0.5709 | | 0.0035 | 7.0 | 10941 | 0.9287 | 0.6317 | | 0.0 | 8.0 | 12504 | 0.9305 | 0.7006 | | 0.0045 | 9.0 | 14067 | 0.9310 | 0.6876 | | 0.0 | 10.0 | 15630 | 0.9313 | 0.6821 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.0.0+cu117 - Datasets 2.14.6 - Tokenizers 0.14.1
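The hyperparameters listed above map onto a transformers TrainingArguments configuration roughly as follows. This is a reconstruction for illustration, not the authors' training script; `output_dir` and the evaluation strategy are assumptions (the per-epoch results table suggests evaluation once per epoch).

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="left_padding30model",   # assumption: any local path works
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",        # assumption based on the per-epoch results table
)
```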
{"base_model": "distilbert-base-uncased", "datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "left_padding30model", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "test", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.93128, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,079
RichardErkhov/NickyNicky_-_gemma-1.1-2b-it_orpo_traslate_en_es_V1-4bits
RichardErkhov
null
[ "safetensors", "gemma", "4-bit", "bitsandbytes", "region:us" ]
2025-01-22T21:38:13Z
2025-01-22T21:39:24+00:00
11
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gemma-1.1-2b-it_orpo_traslate_en_es_V1 - bnb 4bits - Model creator: https://huggingface.co/NickyNicky/ - Original model: https://huggingface.co/NickyNicky/gemma-1.1-2b-it_orpo_traslate_en_es_V1/ Original model description: --- library_name: transformers license: apache-2.0 language: - en - es pipeline_tag: translation datasets: - NickyNicky/Colossal_Translation_EN_ES_ORPO_DPO_Gemma --- # Model Card for Model traslate_en_es <!-- Provide a quick summary of what the model is/does. --> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/WtXc5m_WUkDxmwlXOP_8R.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/fYeHoLQpJlfSJ4hoPSZJI.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/I3bbOBJ8-LfoAt152GzXn.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/QenTj8v5CjNf9JjDd3AoT.png) * p95 prompt length: 620 * p95 prompt + chosen length: 1278 ```py ## template gemma <bos><start_of_turn>system You are an AI that translates from English to Spanish.<end_of_turn> <start_of_turn>user 516 Total views 6-month low Ethereum fees suggest altseason is inbound: Santiment NEWS Own this piece of crypto history Collect this article as NFT Join us on social networks Gas fees on the Ethereum network have plunged to a six-month low even as the price of Ether ETH tickers down $3,260 rallied slightly over the weekend, which analysts from crypto analytics platform Santiment say could signal an upcoming altcoin rally. On April 27, the average fee for an Ethereum translation fell as low as $1.12, according to an April 28 X post from Santiment. “Traders historically move between sentimental cycles of feeling that crypto is going ‘To the Moon’ or feeling that ‘It Is Dead’, which can be observed through transaction fees,” wrote Santiment. <end_of_turn> <start_of_turn>model 516 Visitas totales Las bajas tasas de Ethereum sugerirían que la temporada de altseason está a punto de llegar: Santiment NEWS ¡Posee este artículo como NFT Únete a nosotros en redes sociales Las tarifas de gas en la red de Ethereum han descendido a un mínimo de seis meses incluso mientras el precio de Ether ETH descendió ligeramente durante el fin de semana, lo que los analistas de la plataforma de análisis de criptomonedas Santiment dicen podría señalar un posible aumento de altco. El 27 de abril, la tarifa promedio para una traducción de Ethereum cayó a un mínimo de $1.12, según un post de April 28 de Santiment. “Los comerciantes han históricomente moverse entre ciclos sentimentales de sentir que el cripto está ‘A la Luna’ o sentir que ‘Está muerto’, lo cual se puede observar a través de tarifas de transacciones,” escribió Santiment. 
<end_of_turn>
```

```
# Assumes `model`, `tokenizer` and `prompt` have already been prepared as in the template above.
# The original snippet also passed a custom `stopping_criteria` list; build one with
# transformers.StoppingCriteriaList if you need it, otherwise it can be omitted.
from transformers import GenerationConfig

inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)

generation_config = GenerationConfig(
    max_new_tokens=1000,
    temperature=0.3,
    # top_p=0.55,
    top_k=50,
    repetition_penalty=1.0,
    do_sample=True,
)

outputs = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```

```
train dataset: 49252 values
https://huggingface.co/datasets/NickyNicky/Colossal_Translation_EN_ES_ORPO_DPO_Gemma
```

```
original dataset: https://huggingface.co/datasets/Iker/Colossal-Instruction-Translation-EN-ES
```
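Because this repository already stores the weights quantized to 4-bit with bitsandbytes, loading it should not need an explicit quantization config; the snippet below is an unverified sketch (it assumes a CUDA device with `bitsandbytes` and `accelerate` installed, and builds the Gemma-style prompt shown above by hand):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/NickyNicky_-_gemma-1.1-2b-it_orpo_traslate_en_es_V1-4bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The bitsandbytes 4-bit quantization config is expected to ship with the checkpoint,
# so it should be picked up automatically on load.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Prompt built manually following the Gemma-style template from the original card.
prompt = (
    "<bos><start_of_turn>system\n"
    "You are an AI that translates from English to Spanish.<end_of_turn>\n"
    "<start_of_turn>user\n"
    "Open models make machine translation research easier to reproduce.<end_of_turn>\n"
    "<start_of_turn>model\n"
)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```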
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gemma-1.1-2b-it_orpo_traslate_en_es_V1 - bnb 4bits - Model creator: https://huggingface.co/NickyNicky/ - Original model: https://huggingface.co/NickyNicky/gemma-1.1-2b-it_orpo_traslate_en_es_V1/ Original model description: --- library_name: transformers license: apache-2.0 language: - en - es pipeline_tag: translation datasets: - NickyNicky/Colossal_Translation_EN_ES_ORPO_DPO_Gemma --- # Model Card for Model traslate_en_es <!-- Provide a quick summary of what the model is/does. --> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/WtXc5m_WUkDxmwlXOP_8R.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/fYeHoLQpJlfSJ4hoPSZJI.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/I3bbOBJ8-LfoAt152GzXn.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/QenTj8v5CjNf9JjDd3AoT.png) * p95 prompt length: 620 * p95 prompt + chosen length: 1278 ```py ## template gemma <bos><start_of_turn>system You are an AI that translates from English to Spanish.<end_of_turn> <start_of_turn>user 516 Total views 6-month low Ethereum fees suggest altseason is inbound: Santiment NEWS Own this piece of crypto history Collect this article as NFT Join us on social networks Gas fees on the Ethereum network have plunged to a six-month low even as the price of Ether ETH tickers down $3,260 rallied slightly over the weekend, which analysts from crypto analytics platform Santiment say could signal an upcoming altcoin rally. On April 27, the average fee for an Ethereum translation fell as low as $1.12, according to an April 28 X post from Santiment. “Traders historically move between sentimental cycles of feeling that crypto is going ‘To the Moon’ or feeling that ‘It Is Dead’, which can be observed through transaction fees,” wrote Santiment. <end_of_turn> <start_of_turn>model 516 Visitas totales Las bajas tasas de Ethereum sugerirían que la temporada de altseason está a punto de llegar: Santiment NEWS ¡Posee este artículo como NFT Únete a nosotros en redes sociales Las tarifas de gas en la red de Ethereum han descendido a un mínimo de seis meses incluso mientras el precio de Ether ETH descendió ligeramente durante el fin de semana, lo que los analistas de la plataforma de análisis de criptomonedas Santiment dicen podría señalar un posible aumento de altco. El 27 de abril, la tarifa promedio para una traducción de Ethereum cayó a un mínimo de $1.12, según un post de April 28 de Santiment. “Los comerciantes han históricomente moverse entre ciclos sentimentales de sentir que el cripto está ‘A la Luna’ o sentir que ‘Está muerto’, lo cual se puede observar a través de tarifas de transacciones,” escribió Santiment. 
<end_of_turn>
```

```
# Assumes `model`, `tokenizer` and `prompt` have already been prepared as in the template above.
# The original snippet also passed a custom `stopping_criteria` list; build one with
# transformers.StoppingCriteriaList if you need it, otherwise it can be omitted.
from transformers import GenerationConfig

inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)

generation_config = GenerationConfig(
    max_new_tokens=1000,
    temperature=0.3,
    # top_p=0.55,
    top_k=50,
    repetition_penalty=1.0,
    do_sample=True,
)

outputs = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```

```
train dataset: 49252 values
https://huggingface.co/datasets/NickyNicky/Colossal_Translation_EN_ES_ORPO_DPO_Gemma
```

```
original dataset: https://huggingface.co/datasets/Iker/Colossal-Instruction-Translation-EN-ES
```
{}
task
[ "TRANSLATION" ]
46,080
SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask_finetune
SEBIS
summarization
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2021-06-23T10:20:50+00:00
125
0
--- tags: - summarization widget: - text: public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; } --- # CodeTrans model for source code summarization csharp Pretrained model on programming language csharp using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the csharp code snippets. ## Intended uses & limitations The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/csharp/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 1200 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing csharp code. 
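For finer control over decoding than the `SummarizationPipeline` example above (beam search, length limits), the same checkpoint can presumably be driven through the lower-level seq2seq API; the following sketch uses a made-up tokenized C# function and is not taken from the original card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask_finetune"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Made-up tokenized C# function (the model works best on pre-tokenized csharp code).
tokenized_code = "public static int Add ( int a , int b ) { return a + b ; }"
inputs = tokenizer(tokenized_code, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```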
## Evaluation results

For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results:

| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
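To compute a comparable corpus-level BLEU score on your own generated C# summaries, one option is the `sacrebleu` package; this is only a sketch with made-up strings, and the exact BLEU configuration used for the table above may differ:

```python
import sacrebleu

# Made-up generated summaries and one human reference per function.
hypotheses = [
    "parses a unix timestamp into a local datetime",
    "adds two integers and returns the result",
]
references = [[
    "converts a unix time stamp to a local DateTime",
    "returns the sum of two integers",
]]  # outer list: one entry per reference set, inner list parallel to `hypotheses`

score = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {score.score:.2f}")
```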
null
Non_BioNLP
# CodeTrans model for source code summarization csharp Pretrained model on programming language csharp using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the csharp code snippets. ## Intended uses & limitations The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/csharp/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 1200 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing csharp code. 
## Evaluation results For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"}]}
task
[ "SUMMARIZATION" ]
46,081
hsmashiana/base_model_hpml
hsmashiana
text-classification
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-MiniLM-L3-v2", "base_model:finetune:sentence-transformers/paraphrase-MiniLM-L3-v2", "region:us" ]
2024-05-04T22:17:00Z
2024-05-04T22:17:03+00:00
17
0
--- base_model: sentence-transformers/paraphrase-MiniLM-L3-v2 library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: [] inference: true --- # SetFit with sentence-transformers/paraphrase-MiniLM-L3-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-MiniLM-L3-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L3-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-MiniLM-L3-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L3-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 128 tokens <!-- - **Number of Classes:** Unknown --> <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("setfit_model_id") # Run inference preds = model("I loved the spiderman movie!") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Framework Versions - Python: 3.11.4 - SetFit: 1.0.3 - Sentence Transformers: 2.7.0 - Transformers: 4.34.1 - PyTorch: 2.0.1 - Datasets: 2.12.0 - Tokenizers: 0.14.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
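As a rough illustration of the few-shot procedure described above (contrastive fine-tuning of the Sentence Transformer body, then fitting the logistic-regression head), a sketch with the SetFit 1.x `Trainer` API and a tiny made-up dataset could look like the following; column names, label values and hyperparameters are illustrative assumptions:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Tiny made-up dataset -- real applications would use a handful of labeled examples per class.
train_ds = Dataset.from_dict({
    "text": [
        "I loved the spiderman movie!",
        "Absolutely fantastic experience.",
        "This was a waste of two hours.",
        "Terrible plot and worse acting.",
    ],
    "label": [1, 1, 0, 0],
})

# Start from the same embedding body used by this model.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-MiniLM-L3-v2")
args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()

print(model.predict(["A thoroughly enjoyable film."]))
```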
null
Non_BioNLP
# SetFit with sentence-transformers/paraphrase-MiniLM-L3-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-MiniLM-L3-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L3-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-MiniLM-L3-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L3-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 128 tokens <!-- - **Number of Classes:** Unknown --> <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("setfit_model_id") # Run inference preds = model("I loved the spiderman movie!") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Framework Versions - Python: 3.11.4 - SetFit: 1.0.3 - Sentence Transformers: 2.7.0 - Transformers: 4.34.1 - PyTorch: 2.0.1 - Datasets: 2.12.0 - Tokenizers: 0.14.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "sentence-transformers/paraphrase-MiniLM-L3-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [], "inference": true}
task
[ "TEXT_CLASSIFICATION" ]
46,082
leeju/08-3-4-distilbert-base-uncased-finetuned-clinc
leeju
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-01-09T02:14:27Z
2023-01-09T04:12:02+00:00
15
0
--- datasets: - clinc_oos license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: 08-3-4-distilbert-base-uncased-finetuned-clinc results: - task: type: text-classification name: Text Classification dataset: name: clinc_oos type: clinc_oos config: plus split: train args: plus metrics: - type: accuracy value: 0.9151612903225806 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 08-3-4-distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7777 - Accuracy: 0.9152 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 318 | 3.3018 | 0.7439 | | 3.7971 | 2.0 | 636 | 1.8880 | 0.8406 | | 3.7971 | 3.0 | 954 | 1.1649 | 0.8932 | | 1.7002 | 4.0 | 1272 | 0.8611 | 0.9119 | | 0.9041 | 5.0 | 1590 | 0.7777 | 0.9152 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1 - Datasets 2.8.0 - Tokenizers 0.13.2
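A quick inference sketch for this intent classifier; the example utterance is made up, and the printed label name depends on whether an `id2label` mapping was saved with the checkpoint:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "leeju/08-3-4-distilbert-base-uncased-finetuned-clinc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "how do I reset my bank account password?"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
# Prints a CLINC intent name if id2label was stored with the config, otherwise "LABEL_<n>".
print(model.config.id2label[pred])
```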
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 08-3-4-distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7777 - Accuracy: 0.9152 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 318 | 3.3018 | 0.7439 | | 3.7971 | 2.0 | 636 | 1.8880 | 0.8406 | | 3.7971 | 3.0 | 954 | 1.1649 | 0.8932 | | 1.7002 | 4.0 | 1272 | 0.8611 | 0.9119 | | 0.9041 | 5.0 | 1590 | 0.7777 | 0.9152 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1 - Datasets 2.8.0 - Tokenizers 0.13.2
{"datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "08-3-4-distilbert-base-uncased-finetuned-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "config": "plus", "split": "train", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9151612903225806, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,083
MBZUAI-Paris/Atlas-Chat-2B
MBZUAI-Paris
text-generation
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "ar", "ary", "dataset:MBZUAI-Paris/Darija-SFT-Mixture", "arxiv:2308.16149", "arxiv:2409.17912", "base_model:google/gemma-2-2b-it", "base_model:finetune:google/gemma-2-2b-it", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-09-16T10:11:56Z
2025-03-28T14:56:29+00:00
1,175
23
--- base_model: - google/gemma-2-2b-it datasets: - MBZUAI-Paris/Darija-SFT-Mixture language: - ar - ary library_name: transformers license: gemma pipeline_tag: text-generation tags: - conversational extra_gated_button_content: Acknowledge license --- # JAIS Intiative: Atlas-Chat Models ## Model Overview Atlas-Chat is a family of open models instruction-tuned for Darija, the colloquial Arabic of Morocco, developed as part of the [Jais](https://arxiv.org/abs/2308.16149) project for standard Arabic and its extentions to dialectal Arabic. These models are designed for language generation and excel in various applications such as question answering, summarization, and translation. Thanks to their compact size, Atlas-Chat models can be deployed in resource-constrained environments like laptops, desktops, or personal cloud setups, making advanced AI accessible to Darija speakers and promoting widespread innovation. Three sizes are available: * [Atlas-Chat-2B](https://huggingface.co/MBZUAI-Paris/Atlas-Chat-2B): A small-sized version with 2 billion parameters, capable of generating fluent Moroccan Darija text while maintaining efficiency. * [Atlas-Chat-9B](https://huggingface.co/MBZUAI-Paris/Atlas-Chat-9B): A medium-sized with 9 billion parameters, providing more nuanced, contextually rich language generation for complex tasks. * [Atlas-Chat-27B](https://huggingface.co/MBZUAI-Paris/Atlas-Chat-27B): A large-sized version with 27 billion parameters, offering even more advanced capabilities for complex tasks and nuanced language generation compared to the 2B and 9B versions. The models are designed to assist with: * Conversational agents and chatbots that operate in Darija. * Translation, summarization, and content generation in informal dialect. * Cultural research related to Morocco and its language. **Paper:** [Atlas-Chat: Adapting Large Language Models for Low-Resource Moroccan Arabic Dialect](https://arxiv.org/abs/2409.17912) ## 👥 Our Team The model is developed by MBZUAI France Lab, an AI research center in Paris affiliated with the [Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)](https://mbzuai.ac.ae/) headquartered in Abu Dhabi. ## Usage Below we share some code snippets on how to get quickly started with running the model. First, install the Transformers library with: ```sh pip install -U transformers sentencepiece ``` Then, copy the snippet from the section that is relevant for your use case. #### Running with the `pipeline` API ```python import torch from transformers import pipeline pipe = pipeline( "text-generation", model="MBZUAI-Paris/Atlas-Chat-2B", model_kwargs={"torch_dtype": torch.bfloat16}, device="cuda" # replace with "mps" to run on a Mac device ) messages = [ {"role": "user", "content": 'شكون لي صنعك؟'}, ] outputs = pipe(messages, max_new_tokens=256, temperature=0.0) assistant_response = outputs[0]["generated_text"][-1]["content"].strip() print(assistant_response) ``` - Response: >قادّوني الباحثين والمهندسين ديال جامعة محمد بن زايد للذكاء الاصطناعي. جامعة محمد بن زايد للذكاء الاصطناعي هي جامعة ديال البحت والدراسات العليا، كتّخصّص فتعزيز الذكاء الاصطناعي والاستعمال ديالو لمصلحة الإنسانية. يمكن ليك تزور https://mbzuai.ac.ae/ar/about/ باش تعرف كثر على جامعة محمد بن زايد للذكاء الاصطناعي والمهمة ديالها! 
#### Running the model on a single / multi GPU ```sh pip install accelerate ``` ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "MBZUAI-Paris/Atlas-Chat-2B" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, device_map="auto", torch_dtype=torch.bfloat16, ) messages = [ {"role": "user", "content": "شنو كيتسمى المنتخب المغربي ؟"}, ] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True, , add_generation_prompt=True) outputs = model.generate(**input_ids, max_new_tokens=256) print(tokenizer.decode(outputs[0])) ``` - Response: >المنتخب المغربي كيتسمى "أسود الاطلس". #### Quantized Versions through `bitsandbytes` <details> <summary> Using 8-bit precision (int8) </summary> ```sh pip install bitsandbytes accelerate ``` ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig model_id = "MBZUAI-Paris/Atlas-Chat-2B" quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, quantization_config=quantization_config, ) text = f""" شرح ليا هاد الهضرة: في القرن 19 لقاو الذّهب في كاليفورنيا، ناضو لّي كيبيعو العتلة والفاس كيقنعو الناس بلي غيديرو لاباس يلا قلبو على الذهب... فالأخير اغتنى تجار أدوات التنقيب والحفر. وحاليا كاين لّي كيقنع الأخرين بلي هو مليونير، وعندو الوقت يورّي للآخرين كيفاش يديرو لاباس. """ messages = [ {"role": "user", "content": text}, ] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda") outputs = model.generate(**input_ids, max_new_tokens=256) print(tokenizer.decode(outputs[0]).split("<start_of_turn>model")[-1]) ``` - Response: >ف القرن 19، لقاو الذهب ف كاليفورنيا، وهاد الشي جاب بزاف ديال الناس باش يمشيو ليه. هاد الناس كانو كيبيعو العتلة والفاس وكيتظاهرو بلي إلا قلبو على الذهب غادي يلقاو ليه. ف الآخر، هاد التجار ديال التنقيب والحفر كانو كيتغلبو على الناس اللي بغاو يقلبو على الذهب. > >دابا، كاينين ناس اللي كيتظاهرو بلي هوما مليونيرين وكيتظاهرو بلي عندهم الوقت يورّيو للناس كيفاش يلقاو الذهب. هاد الناس كيتظاهرو بلي عندهم الخبرة والخبرة باش يلقاو الذهب، ولكن ف الحقيقة، هاد الشي ماشي صحيح. </details> <details> <summary> Using 4-bit precision </summary> ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig model_id = "MBZUAI-Paris/Atlas-Chat-2B" quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, quantization_config=quantization_config, ) text = f"""ترجم للدارجة: Atlas Chat is the first open source large language model that talks in Darija. """ messages = [ {"role": "user", "content": text}, ] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True, add_generation_prompt=True) outputs = model.generate(**input_ids, max_new_tokens=256, temperature=0.0) print(tokenizer.decode(outputs[0]).split("<start_of_turn>model")[-1]) ``` - Response: >إن أطلاس شات هو أول نموذج لغة كبير مفتوح المصدر كيهضر بالدارجة. </details> ### Chat Template The models use a chat template that must be adhered to conversational use. The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet. Let's load the model and apply the chat template to a conversation. 
In this example, we'll start with a single user interaction: ```python from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model_id = "MBZUAI-Paris/Atlas-Chat-2B" dtype = torch.bfloat16 tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype=dtype,) chat = [ { "role": "user", "content": "اشنو هو الطاجين ؟"}, ] prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) ``` At this point, the prompt contains the following text: ``` <bos><start_of_turn>user اشنو هو الطاجين ؟<end_of_turn> <start_of_turn>model ``` As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity (either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with the `<end_of_turn>` token. You can follow this format to build the prompt manually, if you need to do it without the tokenizer's chat template. After the prompt is ready, generation can be performed like this: ```python inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512) print(tokenizer.decode(outputs[0])) ``` - Response: >الطاجين هو طبق تقليدي مغربي كيتصاوب من اللحم ولا الدجاج ولا الخضرة، مع الخضرة، والبهارات، والصلصة. كيتطيب فالمقلاة ولا فالمقلى على نار هادية لمدة طويلة، وهاد الشي كيخلي اللحم يطيب بشوية ويبدا يذوب. الطاجين معروف بعمق النكهة ديالو والريحة ديالو، وغالبا كيتقدم مع الرز ولا الخبز. ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. * **Output:** Generated Darija text in response to the input, such as an answer to a question, or a summary of a document. ### Chatbot interface using Ollama You can also use Ollama and chatbot-ollama to create a chatbot user-interface to better test the model. First you need to install Ollama on your machine from [here](https://github.com/ollama/ollama) and have node.js installed as well. Then, download and prepare the model as follows: ```bash huggingface-cli download MBZUAI-Paris/Atlas-Chat-2B --local-dir Atlas-Chat-2B/ ollama create Atlas-Chat-2B -f Atlas-Chat-2B/modelfile ollama serve ``` Finally, in a new terminal clone chatbot-ollama repository from Github and run it: ```bash git clone https://github.com/ivanfioravanti/chatbot-ollama.git cd chatbot-ollama npm ci npm run dev ``` You can start chatting with the model by visiting http://localhost:3000. ### Citation If you use Atlas-Chat in your research, please cite our paper: ```none @article{shang2024atlaschatadaptinglargelanguage, title={Atlas-Chat: Adapting Large Language Models for Low-Resource Moroccan Arabic Dialect}, author={Guokan Shang and Hadi Abdine and Yousef Khoubrane and Amr Mohamed and Yassine Abbahaddou and Sofiane Ennadir and Imane Momayiz and Xuguang Ren and Eric Moulines and Preslav Nakov and Michalis Vazirgiannis and Eric Xing}, year={2024}, eprint={2409.17912}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2409.17912}, } ``` ## Training Data The model was trained on diverse datasets focusing on Darija consisting for approximatley 450k instructions of a maximum length of 2048 tokens, including: * Synthetic instructions created to guide the model in processing various types of language tasks tailord towards Moroccan culture. 
* Instruction samples created from publicly available Moroccan Arabic datasets including translation, summarization and sentiment analysis. * Translated English and multi-lingual instruction-tuning datasets. Our training dataset [Darija-SFT-Mixture](https://huggingface.co/datasets/MBZUAI-Paris/Darija-SFT-Mixture) is publicly available. ## Implementation Information Atlas-Chat models are based on Gemma 2 models. The Atlas-Chat models were trained using 8 Nvidia's A100 80 GB GPUs in parallel using FSDP on AWS Sagemaker. The model is trained using HuggingFace transformers and parameter-efficient fine-tuning with LoRA rank of 256. ## Evaluation The Atlas-Chat models were evaluated on a comprehensive suite of tasks using various datasets and benchmarks to assess their performance across multiple dimensions. These included tasks such as: * **DarijaMMLU:** A Darija version of ArabicMMLU and MMLU benchmarks translated from MSA and English respectively. * **DarijaHellaSwag:** A Darija version of HellaSwag. * **Belebele Ary_Arab:** Belebele is a multiple-choice machine reading comprehension dataset published by Facebook spanning 122 language variants. The Evaluation is done on the Ary_Arab part of Belebele that refers to Darija. * **DarijaAlpacaEval:** A Darija version of AlpacaEval translated to Darija and adapted to the Moroccan culture. * **Sentiment Analysis.** * **Translation:** Including six directions and four languages: Darija, MSA, English and French. * **Transliteration:** Transforming a sentence from Darija (written in Arabic characters) to Arabizi (Written in Latin characters) and vice-versa. * **Summarization.** The models were compared against a collection of existing open-source Arabic models to gauge their effectiveness, with a particular focus on performance in Darija. All scores are based on zero-shot performance. The prompts are written mainly in Darija. The metric used for DarijaMMLU, DarijaHellaSwag, Belebele Ary and Sentiment Analysis is the normalized accuracy. We used [Language Model Evaluation Harness](https://github.com/MBZUAI-Paris/lm-evaluation-harness-atlas-chat) to conduct these evaluations. 
**LLMs Benchmarks:** <table> <tr> <td>Model</td> <td><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaMMLU" target="_blank">DarijaMMLU</a></td> <td><a href="MBZUAI-Paris/DarijaHellaSwag" target="_blank">DarijaHellaSwag</a></td> <td ><a href="https://huggingface.co/datasets/facebook/belebele/viewer/ary_Arab" target="_blank">Belebele Ary</a></td> <td ><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaAlpacaEval" target="_blank">DarijaAlpacaEval</a></td> </tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-family-1p3b-chat" target="_blank">jais-family-1p3b-chat</a></td> <td>35.39</td> <td>27.71</td> <td>38.33</td> <td>35.56</td> </tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-family-2p7b-chat" target="_blank">jais-family-2p7b-chat</a></td> <td>37.44</td> <td>29.10</td> <td>44.11</td> <td>52.97</td> </tr> <tr> <td><a href="https://huggingface.co/google/gemma-2-2b-it" target="_blank">gemma-2-2b-it</a></td> <td>28.58</td> <td>32.42</td> <td>25.22</td> <td>58.67</td> </tr> <tr> <td><a href="meta-llama/Llama-3.2-1B-Instruct" target="_blank">Llama-3.2-1B-Instruct</a></td> <td>27.66</td> <td>26.88</td> <td>28.89</td> <td>23.57</td> </tr> <tr> <td><a href="meta-llama/Llama-3.2-3B-Instruct" target="_blank">Llama-3.2-3B-Instruct</a></td> <td>32.60</td> <td>28.33</td> <td>38.00</td> <td>47.62</td> </tr> <tr> <td><strong><a href="https://huggingface.co/MBZUAI-Paris/Atlas-Chat-2B" target="_blank">Atlas-Chat-2B</a></strong></td> <td><b>44.97</b></td> <td><b>35.08</b></td> <td><b>53.89</b></td> <td><b>92.31</b></td> </tr> <tr style="border-top: 4px solid;"></tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-family-6p7b-chat" target="_blank">jais-family-6p7b-chat</a></td> <td>39.96</td> <td>32.64</td> <td>51.22</td> <td>65.18</td> </tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-adapted-7b-chat" target="_blank">jais-adapted-7b-chat</a></td> <td>39.30</td> <td>29.55</td> <td>43.67</td> <td>61.84</td> </tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-family-13b-chat" target="_blank">jais-family-13b-chat</a></td> <td>45.11</td> <td>33.98</td> <td>58.67</td> <td>69.93</td> </tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-adapted-13b-chat" target="_blank">jais-adapted-13b-chat</a></td> <td>45.20</td> <td>32.84</td> <td>49.67</td> <td>77.52</td> </tr> <tr> <td><a href="https://huggingface.co/FreedomIntelligence/AceGPT-7B-chat" target="_blank">AceGPT-7b-chat</a></td> <td>35.98</td> <td>30.33</td> <td>30.11</td> <td>47.31</td> </tr> <tr> <td><a href="https://huggingface.co/FreedomIntelligence/AceGPT-13B-chat" target="_blank">AceGPT-13b-chat</a></td> <td>41.09</td> <td>38.35</td> <td>33.11</td> <td>52.79</td> </tr> <tr> <td><a href="https://huggingface.co/google/gemma-2-9b-it" target="_blank">gemma-2-9b-it</a></td> <td>35.91</td> <td>32.19</td> <td>31.00</td> <td>90.86</td> </tr> <tr> <td><a href="meta-llama/Meta-Llama-3.1-8B-Instruct" target="_blank">Llama-3.1-8B-Instruct</a></td> <td>44.13</td> <td>31.40</td> <td>47.00</td> <td>78.08</td> </tr> <tr> <td><strong><a href="https://huggingface.co/MBZUAI-Paris/Atlas-Chat-9B" target="_blank">Atlas-Chat-9B</a></strong></td> <td><b>58.23</b></td> <td><b>43.65</b></td> <td><b>74.56</b></td> <td><b>95.62</b></td> </tr> <tr style="border-top: 4px solid;"></tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-family-30b-8k-chat" target="_blank">jais-family-30b-8k-chat</a></td> <td>51.88</td> <td>35.61</td> <td>65.67</td> <td>24.64</td> </tr> <tr> <td><a 
href="https://huggingface.co/google/gemma-2-27b-it" target="_blank">gemma-2-27b-it</a></td> <td>36.47</td> <td>37.04</td> <td>35.78</td> <td>95.07</td> </tr> <tr> <td><strong><a href="https://huggingface.co/MBZUAI-Paris/Atlas-Chat-27B" target="_blank">Atlas-Chat-27B</a></strong></td> <td><b>61.95</b></td> <td><b>48.37</b></td> <td><b>75.67</b></td> <td><b>96.58</b></td> </tr> </table> **Standard NLP Tasks:** <table> <tr> <td rowspan="2">Model</td> <td colspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaBench" target="_blank">DODa-10k (Translation)</a></td> <td colspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaBench" target="_blank">MADAR (Translation)</a></td> <td colspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaBench" target="_blank">FLORES+ (Translation)</a></td> <td colspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaBench" target="_blank">NLLB-Seed (Translation)</a></td> <td colspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaBench" target="_blank">DODa-10k (Transliteration)</a></td> <td rowspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaBench" target="_blank">MArSum (Summarization)</a><br/>(LLM as a judge)</td> <td rowspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaBench" target="_blank">Sentiment Analysis</a></td> </tr> <tr> <td>BLEU</td> <td>chrF</td> <td>BLEU</td> <td>chrF</td> <td>BLEU</td> <td>chrF</td> <td>BLEU</td> <td>chrF</td> <td>BLEU</td> <td>chrF</td> </tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-family-1p3b-chat" target="_blank">jais-family-1p3b-chat</a></td> <td>00.13</td> <td>06.18</td> <td>00.50</td> <td>15.43</td> <td>02.44</td> <td>19.14</td> <td>01.99</td> <td>12.60</td> <td>00.01</td> <td>03.01</td> <td>00.50</td> <td>45.29</td> </tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-family-2p7b-chat" target="_blank">jais-family-2p7b-chat</a></td> <td>00.25</td> <td>07.46</td> <td>00.62</td> <td>16.36</td> <td>04.25</td> <td>18.22</td> <td>03.10</td> <td>08.19</td> <td>00.01</td> <td>03.27</td> <td>00.90</td> <td>51.56</td> </tr> <tr> <td><a href="https://huggingface.co/google/gemma-2-2b-it" target="_blank">gemma-2-2b-it</a></td> <td>00.10</td> <td>04.96</td> <td>00.12</td> <td>06.66</td> <td>01.55</td> <td>18.59</td> <td>02.78</td> <td>23.69</td> <td>00.01</td> <td>02.08</td> <td>06.80</td> <td>53.36</td> </tr> <tr> <td><a href="meta-llama/Llama-3.2-1B-Instruct" target="_blank">Llama-3.2-1B-Instruct</a></td> <td>00.07</td> <td>05.95</td> <td>00.80</td> <td>18.71</td> <td>04.53</td> <td>18.39</td> <td>04.52</td> <td>17.06</td> <td>00.02</td> <td>03.74</td> <td>08.23</td> <td>46.27</td> </tr> <tr> <td><a href="meta-llama/Llama-3.2-3B-Instruct" target="_blank">Llama-3.2-3B-Instruct</a></td> <td>00.62</td> <td>13.67</td> <td>01.18</td> <td>22.12</td> <td>08.59</td> <td>35.21</td> <td>13.75</td> <td>43.63</td> <td>00.21</td> <td>09.68</td> <td>08.23</td> <td>49.20</td> </tr> <tr> <td><strong><a href="https://huggingface.co/MBZUAI-Paris/Atlas-Chat-2B" target="_blank">Atlas-Chat-2B</a></strong></td> <td><b>22.76</td> <td><b>44.86</td> <td><b>16.67</td> <td><b>41.64</td> <td><b>14.92</td> <td><b>43.03</td> <td><b>23.88</td> <td><b>52.19</td> <td><b>08.18</td> <td><b>21.54</td> <td><b>55.22</td> <td><b>73.99</td> </tr> <tr style="border-top: 4px solid;"></tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-family-6p7b-chat" target="_blank">jais-family-6p7b-chat</a></td> <td>00.73</td> 
<td>11.85</td> <td>01.88</td> <td>23.22</td> <td>04.25</td> <td>18.22</td> <td>04.62</td> <td>20.22</td> <td>00.02</td> <td>03.79</td> <td>03.02</td> <td>56.78</td> </tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-adapted-7b-chat" target="_blank">jais-adapted-7b-chat</a></td> <td>00.60</td> <td>09.43</td> <td>03.45</td> <td>25.88</td> <td>07.25</td> <td>23.21</td> <td>01.25</td> <td>02.22</td> <td>00.04</td> <td>03.24</td> <td>02.82</td> <td>52.72</td> </tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-family-13b-chat" target="_blank">jais-family-13b-chat</a></td> <td>00.92</td> <td>11.71</td> <td>04.01</td> <td>28.48</td> <td>05.70</td> <td>27.24</td> <td>04.50</td> <td>22.56</td> <td>00.03</td> <td>03.57</td> <td>01.77</td> <td>41.73</td> </tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-adapted-13b-chat" target="_blank">jais-adapted-13b-chat</a></td> <td>00.87</td> <td>10.52</td> <td>04.02</td> <td>25.29</td> <td>06.66</td> <td>23.46</td> <td>20.14</td> <td>47.87</td> <td>0.04</td> <td>04.77</td> <td>01.92</td> <td>66.68</td> </tr> <tr> <td><a href="https://huggingface.co/FreedomIntelligence/AceGPT-7B-chat" target="_blank">AceGPT-7b-chat</a></td> <td>00.44</td> <td>11.33</td> <td>01.05</td> <td>19.24</td> <td>06.92</td> <td>36.03</td> <td>11.05</td> <td>44.55</td> <td>00.06</td> <td>04.74</td> <td>02.28</td> <td>40.23</td> </tr> <tr> <td><a href="https://huggingface.co/FreedomIntelligence/AceGPT-13B-chat" target="_blank">AceGPT-13b-chat</a></td> <td>00.98</td> <td>16.70</td> <td>00.81</td> <td>20.23</td> <td>08.73</td> <td>40.76</td> <td>14.02</td> <td>48.28</td> <td>00.12</td> <td>06.32</td> <td>02.80</td> <td>59.58</td> </tr> <tr> <td><a href="https://huggingface.co/google/gemma-2-9b-it" target="_blank">gemma-2-9b-it</a></td> <td>03.10</td> <td>19.16</td> <td>01.72</td> <td>24.35</td> <td>05.18</td> <td>36.96</td> <td>08.23</td> <td>43.57</td> <td>00.17</td> <td>09.14</td> <td>13.81</td> <td>59.87</td> </tr> <tr> <td><a href="meta-llama/Meta-Llama-3.1-8B-Instruct" target="_blank">Llama-3.1-8B-Instruct</a></td> <td>00.92</td> <td>14.19</td> <td>01.46</td> <td>23.82</td> <td>08.89</td> <td>33.08</td> <td>11.85</td> <td>35.51</td> <td>00.11</td> <td>06.02</td> <td>16.14</td> <td>44.08</td> </tr> <tr> <td><strong><a href="https://huggingface.co/MBZUAI-Paris/Atlas-Chat-9B" target="_blank">Atlas-Chat-9B</a></strong></td> <td><b>28.08</td> <td><b>50.48</td> <td><b>18.16</td> <td><b>43.91</td> <td><b>18.63</td> <td><b>47.53</td> <td><b>29.98</td> <td><b>58.26</td> <td><b>22.08</td> <td><b>34.17</td> <td><b>59.76</td> <td><b>81.89</td> </tr> <tr style="border-top: 4px solid;"></tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-family-30b-8k-chat" target="_blank">jais-family-30b-8k-chat</a></td> <td>01.10</td> <td>14.40</td> <td>01.67</td> <td>23.37</td> <td>08.52</td> <td>35.41</td> <td>13.71</td> <td>41.33</td> <td>00.05</td> <td>04.48</td> <td>00.46</td> <td>56.73</td> </tr> <tr> <td><a href="https://huggingface.co/google/gemma-2-27b-it" target="_blank">gemma-2-27b-it</a></td> <td>00.67</td> <td>13.04</td> <td>01.74</td> <td>24.63</td> <td>05.17</td> <td>37.08</td> <td>07.36</td> <td>42.49</td> <td>00.03</td> <td>04.94</td> <td>11.10</td> <td>57.59</td> </tr> <tr> <td><strong><a href="https://huggingface.co/MBZUAI-Paris/Atlas-Chat-27B" target="_blank">Atlas-Chat-27B</a></strong></td> <td><b>29.55</td> <td><b>51.74</td> <td><b>19.66</td> <td><b>45.65</td> <td><b>20.34</td> <td><b>49.19</td> <td><b>31.61</td> <td><b>59.37</td> 
<td><b>33.03</td> <td><b>40.95</td> <td><b>60.70</td> <td>73.00</td> </tr> </table> ## Usage and Limitations These models have certain limitations that users should be aware of. <details> <summary>Intended Usage</summary> Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. </details> <details> <summary>Limitations</summary> * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. </details> <details> <summary> Ethical Considerations and Risks</summary> The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit][rai-toolkit]. * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. 
* A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. </details> ## Acknowledgement We would like to express our gratitude to the following institutions for their contributions to this work: École Polytechnique, LINAGORA and KTH Royal Institute of Technology. Additionally, we extend our thanks to the AtlasIA community.
null
Non_BioNLP
# JAIS Initiative: Atlas-Chat Models ## Model Overview Atlas-Chat is a family of open models instruction-tuned for Darija, the colloquial Arabic of Morocco, developed as part of the [Jais](https://arxiv.org/abs/2308.16149) project for standard Arabic and its extensions to dialectal Arabic. These models are designed for language generation and excel in various applications such as question answering, summarization, and translation. Thanks to their compact size, Atlas-Chat models can be deployed in resource-constrained environments like laptops, desktops, or personal cloud setups, making advanced AI accessible to Darija speakers and promoting widespread innovation. Three sizes are available: * [Atlas-Chat-2B](https://huggingface.co/MBZUAI-Paris/Atlas-Chat-2B): A small-sized version with 2 billion parameters, capable of generating fluent Moroccan Darija text while maintaining efficiency. * [Atlas-Chat-9B](https://huggingface.co/MBZUAI-Paris/Atlas-Chat-9B): A medium-sized version with 9 billion parameters, providing more nuanced, contextually rich language generation for complex tasks. * [Atlas-Chat-27B](https://huggingface.co/MBZUAI-Paris/Atlas-Chat-27B): A large-sized version with 27 billion parameters, offering even more advanced capabilities for complex tasks and nuanced language generation compared to the 2B and 9B versions. The models are designed to assist with: * Conversational agents and chatbots that operate in Darija. * Translation, summarization, and content generation in informal dialect. * Cultural research related to Morocco and its language. **Paper:** [Atlas-Chat: Adapting Large Language Models for Low-Resource Moroccan Arabic Dialect](https://arxiv.org/abs/2409.17912) ## 👥 Our Team The model is developed by MBZUAI France Lab, an AI research center in Paris affiliated with the [Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)](https://mbzuai.ac.ae/) headquartered in Abu Dhabi. ## Usage Below we share some code snippets on how to quickly get started with running the model. First, install the Transformers library with: ```sh pip install -U transformers sentencepiece ``` Then, copy the snippet from the section that is relevant for your use case. #### Running with the `pipeline` API ```python import torch from transformers import pipeline pipe = pipeline( "text-generation", model="MBZUAI-Paris/Atlas-Chat-2B", model_kwargs={"torch_dtype": torch.bfloat16}, device="cuda" # replace with "mps" to run on a Mac device ) messages = [ {"role": "user", "content": 'شكون لي صنعك؟'}, ] outputs = pipe(messages, max_new_tokens=256, temperature=0.0) assistant_response = outputs[0]["generated_text"][-1]["content"].strip() print(assistant_response) ``` - Response: >قادّوني الباحثين والمهندسين ديال جامعة محمد بن زايد للذكاء الاصطناعي. جامعة محمد بن زايد للذكاء الاصطناعي هي جامعة ديال البحت والدراسات العليا، كتّخصّص فتعزيز الذكاء الاصطناعي والاستعمال ديالو لمصلحة الإنسانية. يمكن ليك تزور https://mbzuai.ac.ae/ar/about/ باش تعرف كثر على جامعة محمد بن زايد للذكاء الاصطناعي والمهمة ديالها! 
#### Running the model on a single / multi GPU ```sh pip install accelerate ``` ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "MBZUAI-Paris/Atlas-Chat-2B" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, device_map="auto", torch_dtype=torch.bfloat16, ) messages = [ {"role": "user", "content": "شنو كيتسمى المنتخب المغربي ؟"}, ] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True, add_generation_prompt=True) outputs = model.generate(**input_ids, max_new_tokens=256) print(tokenizer.decode(outputs[0])) ``` - Response: >المنتخب المغربي كيتسمى "أسود الاطلس". #### Quantized Versions through `bitsandbytes` <details> <summary> Using 8-bit precision (int8) </summary> ```sh pip install bitsandbytes accelerate ``` ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig model_id = "MBZUAI-Paris/Atlas-Chat-2B" quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, quantization_config=quantization_config, ) text = f""" شرح ليا هاد الهضرة: في القرن 19 لقاو الذّهب في كاليفورنيا، ناضو لّي كيبيعو العتلة والفاس كيقنعو الناس بلي غيديرو لاباس يلا قلبو على الذهب... فالأخير اغتنى تجار أدوات التنقيب والحفر. وحاليا كاين لّي كيقنع الأخرين بلي هو مليونير، وعندو الوقت يورّي للآخرين كيفاش يديرو لاباس. """ messages = [ {"role": "user", "content": text}, ] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda") outputs = model.generate(**input_ids, max_new_tokens=256) print(tokenizer.decode(outputs[0]).split("<start_of_turn>model")[-1]) ``` - Response: >ف القرن 19، لقاو الذهب ف كاليفورنيا، وهاد الشي جاب بزاف ديال الناس باش يمشيو ليه. هاد الناس كانو كيبيعو العتلة والفاس وكيتظاهرو بلي إلا قلبو على الذهب غادي يلقاو ليه. ف الآخر، هاد التجار ديال التنقيب والحفر كانو كيتغلبو على الناس اللي بغاو يقلبو على الذهب. > >دابا، كاينين ناس اللي كيتظاهرو بلي هوما مليونيرين وكيتظاهرو بلي عندهم الوقت يورّيو للناس كيفاش يلقاو الذهب. هاد الناس كيتظاهرو بلي عندهم الخبرة والخبرة باش يلقاو الذهب، ولكن ف الحقيقة، هاد الشي ماشي صحيح. </details> <details> <summary> Using 4-bit precision </summary> ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig model_id = "MBZUAI-Paris/Atlas-Chat-2B" quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, quantization_config=quantization_config, ) text = f"""ترجم للدارجة: Atlas Chat is the first open source large language model that talks in Darija. """ messages = [ {"role": "user", "content": text}, ] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True, add_generation_prompt=True) outputs = model.generate(**input_ids, max_new_tokens=256, temperature=0.0) print(tokenizer.decode(outputs[0]).split("<start_of_turn>model")[-1]) ``` - Response: >إن أطلاس شات هو أول نموذج لغة كبير مفتوح المصدر كيهضر بالدارجة. </details> ### Chat Template The models use a chat template that must be adhered to for conversational use. The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet. Let's load the model and apply the chat template to a conversation. 
In this example, we'll start with a single user interaction: ```python from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model_id = "MBZUAI-Paris/Atlas-Chat-2B" dtype = torch.bfloat16 tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype=dtype,) chat = [ { "role": "user", "content": "اشنو هو الطاجين ؟"}, ] prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) ``` At this point, the prompt contains the following text: ``` <bos><start_of_turn>user اشنو هو الطاجين ؟<end_of_turn> <start_of_turn>model ``` As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity (either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with the `<end_of_turn>` token. You can follow this format to build the prompt manually, if you need to do it without the tokenizer's chat template. After the prompt is ready, generation can be performed like this: ```python inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512) print(tokenizer.decode(outputs[0])) ``` - Response: >الطاجين هو طبق تقليدي مغربي كيتصاوب من اللحم ولا الدجاج ولا الخضرة، مع الخضرة، والبهارات، والصلصة. كيتطيب فالمقلاة ولا فالمقلى على نار هادية لمدة طويلة، وهاد الشي كيخلي اللحم يطيب بشوية ويبدا يذوب. الطاجين معروف بعمق النكهة ديالو والريحة ديالو، وغالبا كيتقدم مع الرز ولا الخبز. ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. * **Output:** Generated Darija text in response to the input, such as an answer to a question, or a summary of a document. ### Chatbot interface using Ollama You can also use Ollama and chatbot-ollama to create a chatbot user interface to better test the model. First, you need to install Ollama on your machine from [here](https://github.com/ollama/ollama) and have node.js installed as well. Then, download and prepare the model as follows: ```bash huggingface-cli download MBZUAI-Paris/Atlas-Chat-2B --local-dir Atlas-Chat-2B/ ollama create Atlas-Chat-2B -f Atlas-Chat-2B/modelfile ollama serve ``` Finally, in a new terminal, clone the chatbot-ollama repository from GitHub and run it: ```bash git clone https://github.com/ivanfioravanti/chatbot-ollama.git cd chatbot-ollama npm ci npm run dev ``` You can start chatting with the model by visiting http://localhost:3000. ### Citation If you use Atlas-Chat in your research, please cite our paper: ```none @article{shang2024atlaschatadaptinglargelanguage, title={Atlas-Chat: Adapting Large Language Models for Low-Resource Moroccan Arabic Dialect}, author={Guokan Shang and Hadi Abdine and Yousef Khoubrane and Amr Mohamed and Yassine Abbahaddou and Sofiane Ennadir and Imane Momayiz and Xuguang Ren and Eric Moulines and Preslav Nakov and Michalis Vazirgiannis and Eric Xing}, year={2024}, eprint={2409.17912}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2409.17912}, } ``` ## Training Data The model was trained on diverse datasets focusing on Darija, consisting of approximately 450k instructions with a maximum length of 2048 tokens, including: * Synthetic instructions created to guide the model in processing various types of language tasks tailored towards Moroccan culture. 
* Instruction samples created from publicly available Moroccan Arabic datasets including translation, summarization and sentiment analysis. * Translated English and multi-lingual instruction-tuning datasets. Our training dataset [Darija-SFT-Mixture](https://huggingface.co/datasets/MBZUAI-Paris/Darija-SFT-Mixture) is publicly available. ## Implementation Information Atlas-Chat models are based on Gemma 2 models. The Atlas-Chat models were trained using 8 NVIDIA A100 80 GB GPUs in parallel using FSDP on AWS Sagemaker. The model was trained using Hugging Face Transformers and parameter-efficient fine-tuning with a LoRA rank of 256. ## Evaluation The Atlas-Chat models were evaluated on a comprehensive suite of tasks using various datasets and benchmarks to assess their performance across multiple dimensions. These included tasks such as: * **DarijaMMLU:** A Darija version of ArabicMMLU and MMLU benchmarks translated from MSA and English respectively. * **DarijaHellaSwag:** A Darija version of HellaSwag. * **Belebele Ary_Arab:** Belebele is a multiple-choice machine reading comprehension dataset published by Facebook spanning 122 language variants. The evaluation is done on the Ary_Arab part of Belebele that refers to Darija. * **DarijaAlpacaEval:** A Darija version of AlpacaEval translated to Darija and adapted to Moroccan culture. * **Sentiment Analysis.** * **Translation:** Including six directions and four languages: Darija, MSA, English and French. * **Transliteration:** Transforming a sentence from Darija (written in Arabic characters) to Arabizi (written in Latin characters) and vice-versa. * **Summarization.** The models were compared against a collection of existing open-source Arabic models to gauge their effectiveness, with a particular focus on performance in Darija. All scores are based on zero-shot performance. The prompts are written mainly in Darija. The metric used for DarijaMMLU, DarijaHellaSwag, Belebele Ary and Sentiment Analysis is the normalized accuracy. We used [Language Model Evaluation Harness](https://github.com/MBZUAI-Paris/lm-evaluation-harness-atlas-chat) to conduct these evaluations. 
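For the translation and transliteration rows in the tables below, BLEU and chrF are corpus-level scores. As a reference point only, here is a minimal sketch of how such scores can be computed with the `sacrebleu` Python package; the file names are placeholders and the official numbers come from the evaluation harness fork linked above, whose exact settings may differ from sacreBLEU's defaults.

```python
# Minimal sketch: corpus-level BLEU and chrF with sacrebleu (pip install sacrebleu).
# File paths below are illustrative placeholders, not part of this repository.
from sacrebleu.metrics import BLEU, CHRF

def load_lines(path):
    # One sentence per line; hypotheses and references aligned by line number.
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f]

hypotheses = load_lines("predictions.darija.txt")  # model outputs (placeholder)
references = load_lines("references.darija.txt")   # gold translations (placeholder)

bleu = BLEU()
chrf = CHRF()

# sacrebleu expects a list of reference streams, hence the extra nesting.
print("BLEU:", bleu.corpus_score(hypotheses, [references]).score)
print("chrF:", chrf.corpus_score(hypotheses, [references]).score)
```

For the multiple-choice benchmarks (DarijaMMLU, DarijaHellaSwag, Belebele Ary, sentiment analysis), the normalized accuracy is taken directly from the harness output.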
**LLMs Benchmarks:** <table> <tr> <td>Model</td> <td><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaMMLU" target="_blank">DarijaMMLU</a></td> <td><a href="MBZUAI-Paris/DarijaHellaSwag" target="_blank">DarijaHellaSwag</a></td> <td ><a href="https://huggingface.co/datasets/facebook/belebele/viewer/ary_Arab" target="_blank">Belebele Ary</a></td> <td ><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaAlpacaEval" target="_blank">DarijaAlpacaEval</a></td> </tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-family-1p3b-chat" target="_blank">jais-family-1p3b-chat</a></td> <td>35.39</td> <td>27.71</td> <td>38.33</td> <td>35.56</td> </tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-family-2p7b-chat" target="_blank">jais-family-2p7b-chat</a></td> <td>37.44</td> <td>29.10</td> <td>44.11</td> <td>52.97</td> </tr> <tr> <td><a href="https://huggingface.co/google/gemma-2-2b-it" target="_blank">gemma-2-2b-it</a></td> <td>28.58</td> <td>32.42</td> <td>25.22</td> <td>58.67</td> </tr> <tr> <td><a href="meta-llama/Llama-3.2-1B-Instruct" target="_blank">Llama-3.2-1B-Instruct</a></td> <td>27.66</td> <td>26.88</td> <td>28.89</td> <td>23.57</td> </tr> <tr> <td><a href="meta-llama/Llama-3.2-3B-Instruct" target="_blank">Llama-3.2-3B-Instruct</a></td> <td>32.60</td> <td>28.33</td> <td>38.00</td> <td>47.62</td> </tr> <tr> <td><strong><a href="https://huggingface.co/MBZUAI-Paris/Atlas-Chat-2B" target="_blank">Atlas-Chat-2B</a></strong></td> <td><b>44.97</b></td> <td><b>35.08</b></td> <td><b>53.89</b></td> <td><b>92.31</b></td> </tr> <tr style="border-top: 4px solid;"></tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-family-6p7b-chat" target="_blank">jais-family-6p7b-chat</a></td> <td>39.96</td> <td>32.64</td> <td>51.22</td> <td>65.18</td> </tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-adapted-7b-chat" target="_blank">jais-adapted-7b-chat</a></td> <td>39.30</td> <td>29.55</td> <td>43.67</td> <td>61.84</td> </tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-family-13b-chat" target="_blank">jais-family-13b-chat</a></td> <td>45.11</td> <td>33.98</td> <td>58.67</td> <td>69.93</td> </tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-adapted-13b-chat" target="_blank">jais-adapted-13b-chat</a></td> <td>45.20</td> <td>32.84</td> <td>49.67</td> <td>77.52</td> </tr> <tr> <td><a href="https://huggingface.co/FreedomIntelligence/AceGPT-7B-chat" target="_blank">AceGPT-7b-chat</a></td> <td>35.98</td> <td>30.33</td> <td>30.11</td> <td>47.31</td> </tr> <tr> <td><a href="https://huggingface.co/FreedomIntelligence/AceGPT-13B-chat" target="_blank">AceGPT-13b-chat</a></td> <td>41.09</td> <td>38.35</td> <td>33.11</td> <td>52.79</td> </tr> <tr> <td><a href="https://huggingface.co/google/gemma-2-9b-it" target="_blank">gemma-2-9b-it</a></td> <td>35.91</td> <td>32.19</td> <td>31.00</td> <td>90.86</td> </tr> <tr> <td><a href="meta-llama/Meta-Llama-3.1-8B-Instruct" target="_blank">Llama-3.1-8B-Instruct</a></td> <td>44.13</td> <td>31.40</td> <td>47.00</td> <td>78.08</td> </tr> <tr> <td><strong><a href="https://huggingface.co/MBZUAI-Paris/Atlas-Chat-9B" target="_blank">Atlas-Chat-9B</a></strong></td> <td><b>58.23</b></td> <td><b>43.65</b></td> <td><b>74.56</b></td> <td><b>95.62</b></td> </tr> <tr style="border-top: 4px solid;"></tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-family-30b-8k-chat" target="_blank">jais-family-30b-8k-chat</a></td> <td>51.88</td> <td>35.61</td> <td>65.67</td> <td>24.64</td> </tr> <tr> <td><a 
href="https://huggingface.co/google/gemma-2-27b-it" target="_blank">gemma-2-27b-it</a></td> <td>36.47</td> <td>37.04</td> <td>35.78</td> <td>95.07</td> </tr> <tr> <td><strong><a href="https://huggingface.co/MBZUAI-Paris/Atlas-Chat-27B" target="_blank">Atlas-Chat-27B</a></strong></td> <td><b>61.95</b></td> <td><b>48.37</b></td> <td><b>75.67</b></td> <td><b>96.58</b></td> </tr> </table> **Standard NLP Tasks:** <table> <tr> <td rowspan="2">Model</td> <td colspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaBench" target="_blank">DODa-10k (Translation)</a></td> <td colspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaBench" target="_blank">MADAR (Translation)</a></td> <td colspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaBench" target="_blank">FLORES+ (Translation)</a></td> <td colspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaBench" target="_blank">NLLB-Seed (Translation)</a></td> <td colspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaBench" target="_blank">DODa-10k (Transliteration)</a></td> <td rowspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaBench" target="_blank">MArSum (Summarization)</a><br/>(LLM as a judge)</td> <td rowspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaBench" target="_blank">Sentiment Analysis</a></td> </tr> <tr> <td>BLEU</td> <td>chrF</td> <td>BLEU</td> <td>chrF</td> <td>BLEU</td> <td>chrF</td> <td>BLEU</td> <td>chrF</td> <td>BLEU</td> <td>chrF</td> </tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-family-1p3b-chat" target="_blank">jais-family-1p3b-chat</a></td> <td>00.13</td> <td>06.18</td> <td>00.50</td> <td>15.43</td> <td>02.44</td> <td>19.14</td> <td>01.99</td> <td>12.60</td> <td>00.01</td> <td>03.01</td> <td>00.50</td> <td>45.29</td> </tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-family-2p7b-chat" target="_blank">jais-family-2p7b-chat</a></td> <td>00.25</td> <td>07.46</td> <td>00.62</td> <td>16.36</td> <td>04.25</td> <td>18.22</td> <td>03.10</td> <td>08.19</td> <td>00.01</td> <td>03.27</td> <td>00.90</td> <td>51.56</td> </tr> <tr> <td><a href="https://huggingface.co/google/gemma-2-2b-it" target="_blank">gemma-2-2b-it</a></td> <td>00.10</td> <td>04.96</td> <td>00.12</td> <td>06.66</td> <td>01.55</td> <td>18.59</td> <td>02.78</td> <td>23.69</td> <td>00.01</td> <td>02.08</td> <td>06.80</td> <td>53.36</td> </tr> <tr> <td><a href="meta-llama/Llama-3.2-1B-Instruct" target="_blank">Llama-3.2-1B-Instruct</a></td> <td>00.07</td> <td>05.95</td> <td>00.80</td> <td>18.71</td> <td>04.53</td> <td>18.39</td> <td>04.52</td> <td>17.06</td> <td>00.02</td> <td>03.74</td> <td>08.23</td> <td>46.27</td> </tr> <tr> <td><a href="meta-llama/Llama-3.2-3B-Instruct" target="_blank">Llama-3.2-3B-Instruct</a></td> <td>00.62</td> <td>13.67</td> <td>01.18</td> <td>22.12</td> <td>08.59</td> <td>35.21</td> <td>13.75</td> <td>43.63</td> <td>00.21</td> <td>09.68</td> <td>08.23</td> <td>49.20</td> </tr> <tr> <td><strong><a href="https://huggingface.co/MBZUAI-Paris/Atlas-Chat-2B" target="_blank">Atlas-Chat-2B</a></strong></td> <td><b>22.76</td> <td><b>44.86</td> <td><b>16.67</td> <td><b>41.64</td> <td><b>14.92</td> <td><b>43.03</td> <td><b>23.88</td> <td><b>52.19</td> <td><b>08.18</td> <td><b>21.54</td> <td><b>55.22</td> <td><b>73.99</td> </tr> <tr style="border-top: 4px solid;"></tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-family-6p7b-chat" target="_blank">jais-family-6p7b-chat</a></td> <td>00.73</td> 
<td>11.85</td> <td>01.88</td> <td>23.22</td> <td>04.25</td> <td>18.22</td> <td>04.62</td> <td>20.22</td> <td>00.02</td> <td>03.79</td> <td>03.02</td> <td>56.78</td> </tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-adapted-7b-chat" target="_blank">jais-adapted-7b-chat</a></td> <td>00.60</td> <td>09.43</td> <td>03.45</td> <td>25.88</td> <td>07.25</td> <td>23.21</td> <td>01.25</td> <td>02.22</td> <td>00.04</td> <td>03.24</td> <td>02.82</td> <td>52.72</td> </tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-family-13b-chat" target="_blank">jais-family-13b-chat</a></td> <td>00.92</td> <td>11.71</td> <td>04.01</td> <td>28.48</td> <td>05.70</td> <td>27.24</td> <td>04.50</td> <td>22.56</td> <td>00.03</td> <td>03.57</td> <td>01.77</td> <td>41.73</td> </tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-adapted-13b-chat" target="_blank">jais-adapted-13b-chat</a></td> <td>00.87</td> <td>10.52</td> <td>04.02</td> <td>25.29</td> <td>06.66</td> <td>23.46</td> <td>20.14</td> <td>47.87</td> <td>0.04</td> <td>04.77</td> <td>01.92</td> <td>66.68</td> </tr> <tr> <td><a href="https://huggingface.co/FreedomIntelligence/AceGPT-7B-chat" target="_blank">AceGPT-7b-chat</a></td> <td>00.44</td> <td>11.33</td> <td>01.05</td> <td>19.24</td> <td>06.92</td> <td>36.03</td> <td>11.05</td> <td>44.55</td> <td>00.06</td> <td>04.74</td> <td>02.28</td> <td>40.23</td> </tr> <tr> <td><a href="https://huggingface.co/FreedomIntelligence/AceGPT-13B-chat" target="_blank">AceGPT-13b-chat</a></td> <td>00.98</td> <td>16.70</td> <td>00.81</td> <td>20.23</td> <td>08.73</td> <td>40.76</td> <td>14.02</td> <td>48.28</td> <td>00.12</td> <td>06.32</td> <td>02.80</td> <td>59.58</td> </tr> <tr> <td><a href="https://huggingface.co/google/gemma-2-9b-it" target="_blank">gemma-2-9b-it</a></td> <td>03.10</td> <td>19.16</td> <td>01.72</td> <td>24.35</td> <td>05.18</td> <td>36.96</td> <td>08.23</td> <td>43.57</td> <td>00.17</td> <td>09.14</td> <td>13.81</td> <td>59.87</td> </tr> <tr> <td><a href="meta-llama/Meta-Llama-3.1-8B-Instruct" target="_blank">Llama-3.1-8B-Instruct</a></td> <td>00.92</td> <td>14.19</td> <td>01.46</td> <td>23.82</td> <td>08.89</td> <td>33.08</td> <td>11.85</td> <td>35.51</td> <td>00.11</td> <td>06.02</td> <td>16.14</td> <td>44.08</td> </tr> <tr> <td><strong><a href="https://huggingface.co/MBZUAI-Paris/Atlas-Chat-9B" target="_blank">Atlas-Chat-9B</a></strong></td> <td><b>28.08</td> <td><b>50.48</td> <td><b>18.16</td> <td><b>43.91</td> <td><b>18.63</td> <td><b>47.53</td> <td><b>29.98</td> <td><b>58.26</td> <td><b>22.08</td> <td><b>34.17</td> <td><b>59.76</td> <td><b>81.89</td> </tr> <tr style="border-top: 4px solid;"></tr> <tr> <td><a href="https://huggingface.co/inceptionai/jais-family-30b-8k-chat" target="_blank">jais-family-30b-8k-chat</a></td> <td>01.10</td> <td>14.40</td> <td>01.67</td> <td>23.37</td> <td>08.52</td> <td>35.41</td> <td>13.71</td> <td>41.33</td> <td>00.05</td> <td>04.48</td> <td>00.46</td> <td>56.73</td> </tr> <tr> <td><a href="https://huggingface.co/google/gemma-2-27b-it" target="_blank">gemma-2-27b-it</a></td> <td>00.67</td> <td>13.04</td> <td>01.74</td> <td>24.63</td> <td>05.17</td> <td>37.08</td> <td>07.36</td> <td>42.49</td> <td>00.03</td> <td>04.94</td> <td>11.10</td> <td>57.59</td> </tr> <tr> <td><strong><a href="https://huggingface.co/MBZUAI-Paris/Atlas-Chat-27B" target="_blank">Atlas-Chat-27B</a></strong></td> <td><b>29.55</td> <td><b>51.74</td> <td><b>19.66</td> <td><b>45.65</td> <td><b>20.34</td> <td><b>49.19</td> <td><b>31.61</td> <td><b>59.37</td> 
<td><b>33.03</td> <td><b>40.95</td> <td><b>60.70</td> <td>73.00</td> </tr> </table> ## Usage and Limitations These models have certain limitations that users should be aware of. <details> <summary>Intended Usage</summary> Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. </details> <details> <summary>Limitations</summary> * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. </details> <details> <summary> Ethical Considerations and Risks</summary> The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit][rai-toolkit]. * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. 
* A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. </details> ## Acknowledgement We would like to express our gratitude to the following institutions for their contributions to this work: École Polytechnique, LINAGORA and KTH Royal Institute of Technology. Additionally, we extend our thanks to the AtlasIA community.
{"base_model": ["google/gemma-2-2b-it"], "datasets": ["MBZUAI-Paris/Darija-SFT-Mixture"], "language": ["ar", "ary"], "library_name": "transformers", "license": "gemma", "pipeline_tag": "text-generation", "tags": ["conversational"], "extra_gated_button_content": "Acknowledge license"}
task
[ "QUESTION_ANSWERING", "TRANSLATION", "SUMMARIZATION" ]
46,084
henryscheible/eval_masked_102_cola
henryscheible
null
[ "pytorch", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "region:us" ]
2023-01-18T00:15:29Z
2023-01-18T00:21:14+00:00
0
0
--- datasets: - glue language: - en license: apache-2.0 metrics: - matthews_correlation tags: - generated_from_trainer model-index: - name: eval_masked_102_cola results: - task: type: text-classification name: Text Classification dataset: name: GLUE COLA type: glue args: cola metrics: - type: matthews_correlation value: 0.5988647643057969 name: Matthews Correlation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eval_masked_102_cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6601 - Matthews Correlation: 0.5989 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eval_masked_102_cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6601 - Matthews Correlation: 0.5989 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
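The hyperparameters listed above map directly onto a standard Hugging Face `Trainer` setup. The following sketch is an illustration of that configuration rather than the original training script (which is not published in this card); the column name `sentence` and the metric wiring follow the usual `datasets`/`evaluate` conventions for GLUE CoLA.

```python
# Illustrative sketch of the configuration listed above (not the original script).
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments, set_seed)

set_seed(42)

raw = load_dataset("glue", "cola")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True)

tokenized = raw.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

metric = evaluate.load("glue", "cola")  # reports matthews_correlation

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return metric.compute(predictions=preds, references=labels)

args = TrainingArguments(
    output_dir="eval_masked_102_cola",
    learning_rate=2e-5,                # values as listed in the card
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,                           # Adam betas/epsilon are the Trainer defaults
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())
```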
{"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["matthews_correlation"], "tags": ["generated_from_trainer"], "model-index": [{"name": "eval_masked_102_cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE COLA", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5988647643057969, "name": "Matthews Correlation"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,085
Triangle104/Bellatrix-Tiny-1B-R1-Q5_K_S-GGUF
Triangle104
text-generation
[ "transformers", "gguf", "GRPO", "Reinforcement learning", "trl", "SFT", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:prithivMLmods/Bellatrix-Tiny-1B-R1", "base_model:quantized:prithivMLmods/Bellatrix-Tiny-1B-R1", "license:llama3.2", "endpoints_compatible", "region:us", "conversational" ]
2025-02-03T22:56:00Z
2025-02-03T22:56:59+00:00
9
0
--- base_model: prithivMLmods/Bellatrix-Tiny-1B-R1 language: - en library_name: transformers license: llama3.2 pipeline_tag: text-generation tags: - GRPO - Reinforcement learning - trl - SFT - llama-cpp - gguf-my-repo --- # Triangle104/Bellatrix-Tiny-1B-R1-Q5_K_S-GGUF This model was converted to GGUF format from [`prithivMLmods/Bellatrix-Tiny-1B-R1`](https://huggingface.co/prithivMLmods/Bellatrix-Tiny-1B-R1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/prithivMLmods/Bellatrix-Tiny-1B-R1) for more details on the model. --- Bellatrix is based on a reasoning-based model designed for the DeepSeek-R1 synthetic dataset entries. The pipeline's instruction-tuned, text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. These models outperform many of the available open-source options. Bellatrix is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions utilize supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF). Use with transformers Starting with transformers >= 4.43.0 onward, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function. Make sure to update your transformers installation via pip install --upgrade transformers. import torch from transformers import pipeline model_id = "prithivMLmods/Bellatrix-Tiny-1B-R1" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) Note: You can also find detailed recipes on how to use the model locally, with torch.compile(), assisted generations, quantised and more at huggingface-llama-recipes Intended Use Bellatrix is designed for applications that require advanced reasoning and multilingual dialogue capabilities. It is particularly suitable for: Agentic Retrieval: Enabling intelligent retrieval of relevant information in a dialogue or query-response system. Summarization Tasks: Condensing large bodies of text into concise summaries for easier comprehension. Multilingual Use Cases: Supporting conversations in multiple languages with high accuracy and coherence. Instruction-Based Applications: Following complex, context-aware instructions to generate precise outputs in a variety of scenarios. Limitations Despite its capabilities, Bellatrix has some limitations: Domain Specificity: While it performs well on general tasks, its performance may degrade with highly specialized or niche datasets. Dependence on Training Data: It is only as good as the quality and diversity of its training data, which may lead to biases or inaccuracies. Computational Resources: The model’s optimized transformer architecture can be resource-intensive, requiring significant computational power for fine-tuning and inference. Language Coverage: While multilingual, some languages or dialects may have limited support or lower performance compared to widely used ones. Real-World Contexts: It may struggle with understanding nuanced or ambiguous real-world scenarios not covered during training. 
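The transformers snippet quoted in the model details above arrives flattened into prose in this card; for readability, here is a minimal, properly formatted sketch of the same pipeline usage. It targets the original full-precision checkpoint rather than the GGUF file in this repository, and assumes transformers >= 4.43 plus a bfloat16-capable device; for the GGUF file itself, use the llama.cpp instructions below.

```python
# Readable sketch of the pipeline example quoted above.
# Loads the original full-precision model, not the GGUF quant in this repo.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="prithivMLmods/Bellatrix-Tiny-1B-R1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipe(messages, max_new_tokens=256)
print(outputs[0]["generated_text"][-1])
```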
--- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Bellatrix-Tiny-1B-R1-Q5_K_S-GGUF --hf-file bellatrix-tiny-1b-r1-q5_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Bellatrix-Tiny-1B-R1-Q5_K_S-GGUF --hf-file bellatrix-tiny-1b-r1-q5_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Bellatrix-Tiny-1B-R1-Q5_K_S-GGUF --hf-file bellatrix-tiny-1b-r1-q5_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Bellatrix-Tiny-1B-R1-Q5_K_S-GGUF --hf-file bellatrix-tiny-1b-r1-q5_k_s.gguf -c 2048 ```
null
Non_BioNLP
# Triangle104/Bellatrix-Tiny-1B-R1-Q5_K_S-GGUF This model was converted to GGUF format from [`prithivMLmods/Bellatrix-Tiny-1B-R1`](https://huggingface.co/prithivMLmods/Bellatrix-Tiny-1B-R1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/prithivMLmods/Bellatrix-Tiny-1B-R1) for more details on the model. --- Bellatrix is based on a reasoning-based model designed for the DeepSeek-R1 synthetic dataset entries. The pipeline's instruction-tuned, text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. These models outperform many of the available open-source options. Bellatrix is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions utilize supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF). Use with transformers Starting with transformers >= 4.43.0 onward, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function. Make sure to update your transformers installation via pip install --upgrade transformers. import torch from transformers import pipeline model_id = "prithivMLmods/Bellatrix-Tiny-1B-R1" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) Note: You can also find detailed recipes on how to use the model locally, with torch.compile(), assisted generations, quantised and more at huggingface-llama-recipes Intended Use Bellatrix is designed for applications that require advanced reasoning and multilingual dialogue capabilities. It is particularly suitable for: Agentic Retrieval: Enabling intelligent retrieval of relevant information in a dialogue or query-response system. Summarization Tasks: Condensing large bodies of text into concise summaries for easier comprehension. Multilingual Use Cases: Supporting conversations in multiple languages with high accuracy and coherence. Instruction-Based Applications: Following complex, context-aware instructions to generate precise outputs in a variety of scenarios. Limitations Despite its capabilities, Bellatrix has some limitations: Domain Specificity: While it performs well on general tasks, its performance may degrade with highly specialized or niche datasets. Dependence on Training Data: It is only as good as the quality and diversity of its training data, which may lead to biases or inaccuracies. Computational Resources: The model’s optimized transformer architecture can be resource-intensive, requiring significant computational power for fine-tuning and inference. Language Coverage: While multilingual, some languages or dialects may have limited support or lower performance compared to widely used ones. Real-World Contexts: It may struggle with understanding nuanced or ambiguous real-world scenarios not covered during training. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. 
### CLI: ```bash llama-cli --hf-repo Triangle104/Bellatrix-Tiny-1B-R1-Q5_K_S-GGUF --hf-file bellatrix-tiny-1b-r1-q5_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Bellatrix-Tiny-1B-R1-Q5_K_S-GGUF --hf-file bellatrix-tiny-1b-r1-q5_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Bellatrix-Tiny-1B-R1-Q5_K_S-GGUF --hf-file bellatrix-tiny-1b-r1-q5_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Bellatrix-Tiny-1B-R1-Q5_K_S-GGUF --hf-file bellatrix-tiny-1b-r1-q5_k_s.gguf -c 2048 ```
{"base_model": "prithivMLmods/Bellatrix-Tiny-1B-R1", "language": ["en"], "library_name": "transformers", "license": "llama3.2", "pipeline_tag": "text-generation", "tags": ["GRPO", "Reinforcement learning", "trl", "SFT", "llama-cpp", "gguf-my-repo"]}
task
[ "SUMMARIZATION" ]
46,086
MaziyarPanahi/calme-2.2-llama3.1-70b
MaziyarPanahi
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "chat", "facebook", "llaam3", "finetune", "chatml", "conversational", "en", "dataset:MaziyarPanahi/truthy-dpo-v0.1-axolotl", "base_model:meta-llama/Llama-3.1-70B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-70B-Instruct", "model-index", "autotrain_compatible", "text-generation-inference", "region:us" ]
2024-09-09T12:16:27Z
2024-09-29T11:00:26+00:00
59
2
--- base_model: meta-llama/Meta-Llama-3.1-70B-Instruct datasets: - MaziyarPanahi/truthy-dpo-v0.1-axolotl language: - en library_name: transformers pipeline_tag: text-generation tags: - chat - llama - facebook - llaam3 - finetune - chatml inference: false model_creator: MaziyarPanahi quantized_by: MaziyarPanahi model-index: - name: calme-2.2-llama3.1-70b results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 85.93 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3.1-70b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 54.21 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3.1-70b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 2.11 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3.1-70b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 9.96 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3.1-70b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 17.07 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3.1-70b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 49.05 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3.1-70b name: Open LLM Leaderboard --- <img src="./calme-2.webp" alt="Calme-2 Models" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # MaziyarPanahi/calme-2.2-llama3.1-70b This model is a fine-tuned version of the powerful `meta-llama/Meta-Llama-3.1-70B-Instruct`, pushing the boundaries of natural language understanding and generation even further. My goal was to create a versatile and robust model that excels across a wide range of benchmarks and real-world applications. ## Use Cases This model is suitable for a wide range of applications, including but not limited to: - Advanced question-answering systems - Intelligent chatbots and virtual assistants - Content generation and summarization - Code generation and analysis - Complex problem-solving and decision support # ⚡ Quantized GGUF coming soon! 
# 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__calme-2.2-llama3.1-70b) | Metric |Value| |-------------------|----:| |Avg. |36.39| |IFEval (0-Shot) |85.93| |BBH (3-Shot) |54.21| |MATH Lvl 5 (4-Shot)| 2.11| |GPQA (0-shot) | 9.96| |MuSR (0-shot) |17.07| |MMLU-PRO (5-shot) |49.05| This model uses the Llama 3 prompt template: ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` # How to use ```python # Use a pipeline as a high-level helper from transformers import pipeline messages = [ {"role": "user", "content": "Who are you?"}, ] pipe = pipeline("text-generation", model="MaziyarPanahi/calme-2.2-llama3.1-70b") pipe(messages) # Load model directly from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.2-llama3.1-70b") model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.2-llama3.1-70b") ``` # Ethical Considerations As with any large language model, users should be aware of potential biases and limitations. We recommend implementing appropriate safeguards and human oversight when deploying this model in production environments.
null
Non_BioNLP
<img src="./calme-2.webp" alt="Calme-2 Models" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # MaziyarPanahi/calme-2.2-llama3.1-70b This model is a fine-tuned version of the powerful `meta-llama/Meta-Llama-3.1-70B-Instruct`, pushing the boundaries of natural language understanding and generation even further. My goal was to create a versatile and robust model that excels across a wide range of benchmarks and real-world applications. ## Use Cases This model is suitable for a wide range of applications, including but not limited to: - Advanced question-answering systems - Intelligent chatbots and virtual assistants - Content generation and summarization - Code generation and analysis - Complex problem-solving and decision support # ⚡ Quantized GGUF coming soon! # 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__calme-2.2-llama3.1-70b) | Metric |Value| |-------------------|----:| |Avg. |36.39| |IFEval (0-Shot) |85.93| |BBH (3-Shot) |54.21| |MATH Lvl 5 (4-Shot)| 2.11| |GPQA (0-shot) | 9.96| |MuSR (0-shot) |17.07| |MMLU-PRO (5-shot) |49.05| This model uses `ChatML` prompt template: ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` # How to use ```python # Use a pipeline as a high-level helper from transformers import pipeline messages = [ {"role": "user", "content": "Who are you?"}, ] pipe = pipeline("text-generation", model="MaziyarPanahi/calme-2.2-llama3.1-70b") pipe(messages) # Load model directly from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.2-llama3.1-70b") model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.2-llama3.1-70b") ``` # Ethical Considerations As with any large language model, users should be aware of potential biases and limitations. We recommend implementing appropriate safeguards and human oversight when deploying this model in production environments.
{"base_model": "meta-llama/Meta-Llama-3.1-70B-Instruct", "datasets": ["MaziyarPanahi/truthy-dpo-v0.1-axolotl"], "language": ["en"], "library_name": "transformers", "pipeline_tag": "text-generation", "tags": ["chat", "llama", "facebook", "llaam3", "finetune", "chatml"], "inference": false, "model_creator": "MaziyarPanahi", "quantized_by": "MaziyarPanahi", "model-index": [{"name": "calme-2.2-llama3.1-70b", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "IFEval (0-Shot)", "type": "HuggingFaceH4/ifeval", "args": {"num_few_shot": 0}}, "metrics": [{"type": "inst_level_strict_acc and prompt_level_strict_acc", "value": 85.93, "name": "strict accuracy"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3.1-70b", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "BBH (3-Shot)", "type": "BBH", "args": {"num_few_shot": 3}}, "metrics": [{"type": "acc_norm", "value": 54.21, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3.1-70b", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MATH Lvl 5 (4-Shot)", "type": "hendrycks/competition_math", "args": {"num_few_shot": 4}}, "metrics": [{"type": "exact_match", "value": 2.11, "name": "exact match"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3.1-70b", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GPQA (0-shot)", "type": "Idavidrein/gpqa", "args": {"num_few_shot": 0}}, "metrics": [{"type": "acc_norm", "value": 9.96, "name": "acc_norm"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3.1-70b", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MuSR (0-shot)", "type": "TAUR-Lab/MuSR", "args": {"num_few_shot": 0}}, "metrics": [{"type": "acc_norm", "value": 17.07, "name": "acc_norm"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3.1-70b", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU-PRO (5-shot)", "type": "TIGER-Lab/MMLU-Pro", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 49.05, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3.1-70b", "name": "Open LLM Leaderboard"}}]}]}
task
[ "SUMMARIZATION" ]
46,087
Davlan/bert-base-multilingual-cased-finetuned-naija
Davlan
fill-mask
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2021-06-15T20:39:28+00:00
116
0
--- {} --- --- language: pcm datasets: --- # bert-base-multilingual-cased-finetuned-naija ## Model description **bert-base-multilingual-cased-finetuned-naija** is a **Nigerian-Pidgin BERT** model obtained by fine-tuning the **bert-base-multilingual-cased** model on Nigerian-Pidgin language texts. It provides **better performance** than the multilingual BERT on named entity recognition datasets. Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on a Nigerian-Pidgin corpus. ## Intended uses & limitations #### How to use You can use this model with the Transformers *pipeline* for masked token prediction. ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-naija') >>> unmasker("Another attack on ambulance happen for Koforidua in March [MASK] year where robbers kill Ambulance driver") ``` #### Limitations and bias This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. ## Training data This model was fine-tuned on JW300 + [BBC Pidgin](https://www.bbc.com/pidgin) ## Training procedure This model was trained on a single NVIDIA V100 GPU ## Eval results on Test set (F-score, average over 5 runs) Dataset| mBERT F1 | pcm_bert F1 -|-|- [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 87.23 | 89.95 ### BibTeX entry and citation info By David Adelani ``` ```
null
Non_BioNLP
--- language: pcm datasets: --- # bert-base-multilingual-cased-finetuned-naija ## Model description **bert-base-multilingual-cased-finetuned-naija** is a **Nigerian-Pidgin BERT** model obtained by fine-tuning the **bert-base-multilingual-cased** model on Nigerian-Pidgin language texts. It provides **better performance** than the multilingual BERT on named entity recognition datasets. Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on a Nigerian-Pidgin corpus. ## Intended uses & limitations #### How to use You can use this model with the Transformers *pipeline* for masked token prediction. ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-naija') >>> unmasker("Another attack on ambulance happen for Koforidua in March [MASK] year where robbers kill Ambulance driver") ``` #### Limitations and bias This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. ## Training data This model was fine-tuned on JW300 + [BBC Pidgin](https://www.bbc.com/pidgin) ## Training procedure This model was trained on a single NVIDIA V100 GPU ## Eval results on Test set (F-score, average over 5 runs) Dataset| mBERT F1 | pcm_bert F1 -|-|- [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 87.23 | 89.95 ### BibTeX entry and citation info By David Adelani ``` ```
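Since the card positions this checkpoint as a stronger backbone than multilingual BERT for Nigerian-Pidgin NER (see the MasakhaNER scores above), a natural downstream step is token-classification fine-tuning. The sketch below is only an illustration of that setup and is not part of the original card; the label count assumes a BIO scheme over MasakhaNER's four entity types (PER, ORG, LOC, DATE) plus O, and should be adjusted to your own label set.

```python
# Illustrative sketch: using this checkpoint as the backbone for NER fine-tuning.
# num_labels=9 is an assumption (BIO over PER/ORG/LOC/DATE plus O); adjust as needed.
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_name = "Davlan/bert-base-multilingual-cased-finetuned-naija"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=9)

# The classification head above is randomly initialised; it still needs to be
# trained on labelled data (e.g. the pcm portion of MasakhaNER) before use.
```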
{}
task
[ "NAMED_ENTITY_RECOGNITION" ]
46,088
Triangle104/QwQ-LCoT-3B-Instruct-Q5_K_M-GGUF
Triangle104
text-generation
[ "transformers", "gguf", "text-generation-inference", "long-CoT", "safetensors", "3B", "Instruct", "QwQ", "Qwen2.5", "llama-cpp", "gguf-my-repo", "text-generation", "en", "dataset:amphora/QwQ-LongCoT-130K", "base_model:prithivMLmods/QwQ-LCoT-3B-Instruct", "base_model:quantized:prithivMLmods/QwQ-LCoT-3B-Instruct", "license:creativeml-openrail-m", "endpoints_compatible", "region:us", "conversational" ]
2024-12-29T14:04:03Z
2024-12-29T14:04:48+00:00
13
0
--- base_model: prithivMLmods/QwQ-LCoT-3B-Instruct datasets: - amphora/QwQ-LongCoT-130K language: - en library_name: transformers license: creativeml-openrail-m pipeline_tag: text-generation tags: - text-generation-inference - long-CoT - safetensors - 3B - Instruct - QwQ - Qwen2.5 - llama-cpp - gguf-my-repo --- # Triangle104/QwQ-LCoT-3B-Instruct-Q5_K_M-GGUF This model was converted to GGUF format from [`prithivMLmods/QwQ-LCoT-3B-Instruct`](https://huggingface.co/prithivMLmods/QwQ-LCoT-3B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/prithivMLmods/QwQ-LCoT-3B-Instruct) for more details on the model. --- Model details: - The QwQ-LCoT-3B-Instruct model is a lightweight, instruction-tuned language model designed for complex reasoning and explanation tasks. It is fine-tuned on the Qwen2.5-3B-Instruct base model using the QwQ-LongCoT-130K dataset, focusing on long-chain-of-thought (LCoT) reasoning for enhanced logical comprehension and detailed output generation. Key Features: Long Chain-of-Thought Reasoning: Specifically designed to generate comprehensive, step-by-step explanations for complex queries. Lightweight and Efficient: With only 3 billion parameters, it is optimized for systems with limited computational resources without compromising reasoning capabilities. Instruction Optimization: Fine-tuned to follow prompts and provide concise, actionable, and structured responses. Training Details: Base Model: Qwen2.5-3B-Instruct Dataset: amphora/QwQ-LongCoT-130K Comprising 133,000 annotated samples focusing on logical tasks and structured thinking. Capabilities: Text Generation: Provides detailed, structured, and logical text outputs tailored to user prompts. Reasoning Tasks: Solves step-by-step problems in math, logic, and science. Educational Assistance: Generates coherent explanations for academic and research purposes. Dialogue and Summarization: Handles conversational queries and summarizes long documents effectively. Usage Instructions: Setup: Download all model files and ensure compatibility with the Hugging Face Transformers library. Loading the Model: from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "prithivMLmods/QwQ-LCoT-3B-Instruct" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) Generate Long-Chain Reasoning Outputs: input_text = "Explain the process of photosynthesis step-by-step." inputs = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**inputs, max_length=300, temperature=0.5) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) Customize Output Generation: Modify the generation_config.json file for different scenarios: temperature: Controls randomness (lower = deterministic, higher = creative). max_length: Sets response length. top_p: Adjusts sampling for diversity in outputs. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. 
### CLI: ```bash llama-cli --hf-repo Triangle104/QwQ-LCoT-3B-Instruct-Q5_K_M-GGUF --hf-file qwq-lcot-3b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/QwQ-LCoT-3B-Instruct-Q5_K_M-GGUF --hf-file qwq-lcot-3b-instruct-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/QwQ-LCoT-3B-Instruct-Q5_K_M-GGUF --hf-file qwq-lcot-3b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/QwQ-LCoT-3B-Instruct-Q5_K_M-GGUF --hf-file qwq-lcot-3b-instruct-q5_k_m.gguf -c 2048 ```
null
Non_BioNLP
# Triangle104/QwQ-LCoT-3B-Instruct-Q5_K_M-GGUF This model was converted to GGUF format from [`prithivMLmods/QwQ-LCoT-3B-Instruct`](https://huggingface.co/prithivMLmods/QwQ-LCoT-3B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/prithivMLmods/QwQ-LCoT-3B-Instruct) for more details on the model. --- Model details: - The QwQ-LCoT-3B-Instruct model is a lightweight, instruction-tuned language model designed for complex reasoning and explanation tasks. It is fine-tuned on the Qwen2.5-3B-Instruct base model using the QwQ-LongCoT-130K dataset, focusing on long-chain-of-thought (LCoT) reasoning for enhanced logical comprehension and detailed output generation. Key Features: Long Chain-of-Thought Reasoning: Specifically designed to generate comprehensive, step-by-step explanations for complex queries. Lightweight and Efficient: With only 3 billion parameters, it is optimized for systems with limited computational resources without compromising reasoning capabilities. Instruction Optimization: Fine-tuned to follow prompts and provide concise, actionable, and structured responses. Training Details: Base Model: Qwen2.5-3B-Instruct Dataset: amphora/QwQ-LongCoT-130K Comprising 133,000 annotated samples focusing on logical tasks and structured thinking. Capabilities: Text Generation: Provides detailed, structured, and logical text outputs tailored to user prompts. Reasoning Tasks: Solves step-by-step problems in math, logic, and science. Educational Assistance: Generates coherent explanations for academic and research purposes. Dialogue and Summarization: Handles conversational queries and summarizes long documents effectively. Usage Instructions: Setup: Download all model files and ensure compatibility with the Hugging Face Transformers library. Loading the Model: from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "prithivMLmods/QwQ-LCoT-3B-Instruct" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) Generate Long-Chain Reasoning Outputs: input_text = "Explain the process of photosynthesis step-by-step." inputs = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**inputs, max_length=300, temperature=0.5) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) Customize Output Generation: Modify the generation_config.json file for different scenarios: temperature: Controls randomness (lower = deterministic, higher = creative). max_length: Sets response length. top_p: Adjusts sampling for diversity in outputs. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/QwQ-LCoT-3B-Instruct-Q5_K_M-GGUF --hf-file qwq-lcot-3b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/QwQ-LCoT-3B-Instruct-Q5_K_M-GGUF --hf-file qwq-lcot-3b-instruct-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. 
``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/QwQ-LCoT-3B-Instruct-Q5_K_M-GGUF --hf-file qwq-lcot-3b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/QwQ-LCoT-3B-Instruct-Q5_K_M-GGUF --hf-file qwq-lcot-3b-instruct-q5_k_m.gguf -c 2048 ```
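A minimal sketch of the "Customize Output Generation" guidance above, using the original full-precision checkpoint with the Transformers `GenerationConfig` API; the parameter values and the sampling switch are illustrative assumptions, not settings prescribed by the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_name = "prithivMLmods/QwQ-LCoT-3B-Instruct"  # original (non-GGUF) checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Illustrative values: a lower temperature keeps reasoning chains more deterministic,
# while raising temperature/top_p yields more varied output.
gen_config = GenerationConfig(
    temperature=0.5,
    top_p=0.9,
    max_length=300,
    do_sample=True,
)

prompt = "Explain the process of photosynthesis step-by-step."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, generation_config=gen_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```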
{"base_model": "prithivMLmods/QwQ-LCoT-3B-Instruct", "datasets": ["amphora/QwQ-LongCoT-130K"], "language": ["en"], "library_name": "transformers", "license": "creativeml-openrail-m", "pipeline_tag": "text-generation", "tags": ["text-generation-inference", "long-CoT", "safetensors", "3B", "Instruct", "QwQ", "Qwen2.5", "llama-cpp", "gguf-my-repo"]}
task
[ "SUMMARIZATION" ]
46,089
math-similarity/Bert-MLM_arXiv-MP-class_arXiv
math-similarity
sentence-similarity
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-05-18T15:13:58Z
2024-06-06T16:26:05+00:00
17
0
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # Bert-MLM_arXiv-MP-class_arXiv This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. The model is specifically designed to compute similarities of short mathematical texts. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["In this paper we show how to compute the $\\Lambda_{\\alpha}$ norm, $\\alpha\\ge 0$, using the dyadic grid. This result is a consequence of the description of the Hardy spaces $H^p(R^N)$ in terms of dyadic and special atoms.", "We show that a determinant of Stirling cycle numbers counts unlabeled acyclic single-source automata. The proof involves a bijection from these automata to certain marked lattice paths and a sign-reversing involution to evaluate the determinant."] model = SentenceTransformer('math-similarity/Bert-MLM_arXiv-MP-class_arXiv') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["In this paper we show how to compute the $\\Lambda_{\\alpha}$ norm, $\\alpha\\ge 0$, using the dyadic grid. This result is a consequence of the description of the Hardy spaces $H^p(R^N)$ in terms of dyadic and special atoms.", "We show that a determinant of Stirling cycle numbers counts unlabeled acyclic single-source automata. The proof involves a bijection from these automata to certain marked lattice paths and a sign-reversing involution to evaluate the determinant."] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('math-similarity/Bert-MLM_arXiv-MP-class_arXiv') model = AutoModel.from_pretrained('math-similarity/Bert-MLM_arXiv-MP-class_arXiv') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` --------- ## Background ## Intended uses Our model is intended to be used as a sentence and short paragraph encoder for mathematical texts. Given an input text, it outputs a vector which captures the semantic information. 
The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 256 word pieces is truncated. ## Training procedure ### Domain adaptation We use the domain-adapted [math-similarity/Bert-MLM_arXiv](https://huggingface.co/math-similarity/Bert-MLM_arXiv) model. Please refer to the model card for more detailed information about the domain-adaptation procedure. ### Pooling We add a mean-pooling layer on top of the domain-adapted model. ### Fine-tuning We fine-tune the model using a cosine-similarity objective. Formally, it computes the vectors `u = model(sentence_A)` and `v = model(sentence_B)` and measures the cosine-similarity between the two. By default, it minimizes the following loss: `||input_label - cos_score_transformation(cosine_sim(u,v))||_2`, with MSE as the loss function. We use pairs of concatenated title+abstract texts from [arXiv](https://arxiv.org) as the fine-tuning dataset and model semantic similarity using their classification codes. Two texts are defined as similar if they share their classification code. Otherwise, they are defined as semantically dissimilar. We only use texts with a singular classification code in this dataset. The training set contains 43,572 text pairs and the evaluation set contains 5,447 pairs. See the [fine-tuning notebook](https://github.com/math-collab/text-similarity/blob/main/Bert-MLM%20%2B%20mean%20pooling%20%2B%20fine-tune%20arXiv-class.ipynb) for more information. We do not include the fine-tuning dataset directly. It can be recreated using its [generation notebook](https://github.com/math-collab/text-similarity/blob/main/%5Bclass%5D%5Barxiv%5D%20data%20set%20generation.ipynb) and [datasets/math-similarity/arXiv-metadata-oai-snapshot-111](https://huggingface.co/datasets/math-similarity/arXiv-metadata-oai-snapshot-111). ## Citing & Authors This model is an additional resource for the [CICM'24](https://cicm-conference.org/2024/cicm.php) submission *On modelling similarity of short mathematical texts*.
null
Non_BioNLP
# Bert-MLM_arXiv-MP-class_arXiv This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. The model is specifically designed to compute similarities of short mathematical texts. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["In this paper we show how to compute the $\\Lambda_{\\alpha}$ norm, $\\alpha\\ge 0$, using the dyadic grid. This result is a consequence of the description of the Hardy spaces $H^p(R^N)$ in terms of dyadic and special atoms.", "We show that a determinant of Stirling cycle numbers counts unlabeled acyclic single-source automata. The proof involves a bijection from these automata to certain marked lattice paths and a sign-reversing involution to evaluate the determinant."] model = SentenceTransformer('math-similarity/Bert-MLM_arXiv-MP-class_arXiv') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["In this paper we show how to compute the $\\Lambda_{\\alpha}$ norm, $\\alpha\\ge 0$, using the dyadic grid. This result is a consequence of the description of the Hardy spaces $H^p(R^N)$ in terms of dyadic and special atoms.", "We show that a determinant of Stirling cycle numbers counts unlabeled acyclic single-source automata. The proof involves a bijection from these automata to certain marked lattice paths and a sign-reversing involution to evaluate the determinant."] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('math-similarity/Bert-MLM_arXiv-MP-class_arXiv') model = AutoModel.from_pretrained('math-similarity/Bert-MLM_arXiv-MP-class_arXiv') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` --------- ## Background ## Intended uses Our model is intended to be used as a sentence and short paragraph encoder for mathematical texts. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 256 word pieces is truncated. 
## Training procedure ### Domain adaptation We use the domain-adapted [math-similarity/Bert-MLM_arXiv](https://huggingface.co/math-similarity/Bert-MLM_arXiv) model. Please refer to the model card for more detailed information about the domain-adaptation procedure. ### Pooling We add a mean-pooling layer on top of the domain-adapted model. ### Fine-tuning We fine-tune the model using a cosine-similarity objective. Formally, it computes the vectors `u = model(sentence_A)` and `v = model(sentence_B)` and measures the cosine-similarity between the two. By default, it minimizes the following loss: `||input_label - cos_score_transformation(cosine_sim(u,v))||_2`, with MSE as the loss function. We use pairs of concatenated title+abstract texts from [arXiv](https://arxiv.org) as the fine-tuning dataset and model semantic similarity using their classification codes. Two texts are defined as similar if they share their classification code. Otherwise, they are defined as semantically dissimilar. We only use texts with a singular classification code in this dataset. The training set contains 43,572 text pairs and the evaluation set contains 5,447 pairs. See the [fine-tuning notebook](https://github.com/math-collab/text-similarity/blob/main/Bert-MLM%20%2B%20mean%20pooling%20%2B%20fine-tune%20arXiv-class.ipynb) for more information. We do not include the fine-tuning dataset directly. It can be recreated using its [generation notebook](https://github.com/math-collab/text-similarity/blob/main/%5Bclass%5D%5Barxiv%5D%20data%20set%20generation.ipynb) and [datasets/math-similarity/arXiv-metadata-oai-snapshot-111](https://huggingface.co/datasets/math-similarity/arXiv-metadata-oai-snapshot-111). ## Citing & Authors This model is an additional resource for the [CICM'24](https://cicm-conference.org/2024/cicm.php) submission *On modelling similarity of short mathematical texts*.
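The fine-tuning objective described above corresponds to the standard sentence-transformers cosine-similarity training loop. The sketch below is an illustration of that setup, not the authors' script: the example pairs, batch size, epoch count and warm-up steps are assumptions.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, models

# Rebuild the encoder: domain-adapted BERT plus mean pooling, as described above.
word_embedding = models.Transformer("math-similarity/Bert-MLM_arXiv", max_seq_length=256)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode="mean")
model = SentenceTransformer(modules=[word_embedding, pooling])

# Hypothetical pairs: label 1.0 when two title+abstract texts share an arXiv class, else 0.0.
train_examples = [
    InputExample(texts=["title + abstract A", "title + abstract B"], label=1.0),
    InputExample(texts=["title + abstract A", "title + abstract C"], label=0.0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# CosineSimilarityLoss minimises the MSE between cosine_sim(u, v) and the label,
# matching the ||label - cos_score_transformation(cosine_sim(u, v))||_2 objective quoted above.
train_loss = losses.CosineSimilarityLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
```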
{"pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"]}
task
[ "SEMANTIC_SIMILARITY" ]
46,091
RichardErkhov/jsonfin17_-_autotrain-financial-convo-summary-89094143854-8bits
RichardErkhov
text-generation
[ "transformers", "safetensors", "bart", "text-generation", "autotrain_compatible", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
2024-05-09T21:14:17Z
2024-05-09T21:14:56+00:00
4
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) autotrain-financial-convo-summary-89094143854 - bnb 8bits - Model creator: https://huggingface.co/jsonfin17/ - Original model: https://huggingface.co/jsonfin17/autotrain-financial-convo-summary-89094143854/ Original model description: --- tags: - autotrain - summarization language: - unk widget: - text: "I love AutoTrain" datasets: - jsonfin17/autotrain-data-financial-convo-summary co2_eq_emissions: emissions: 0.6036233338330799 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 89094143854 - CO2 Emissions (in grams): 0.6036 ## Validation Metrics - Loss: 2.399 - Rouge1: 32.368 - Rouge2: 4.298 - RougeL: 20.788 - RougeLsum: 28.288 - Gen Len: 71.000 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/jsonfin17/autotrain-financial-convo-summary-89094143854 ```
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) autotrain-financial-convo-summary-89094143854 - bnb 8bits - Model creator: https://huggingface.co/jsonfin17/ - Original model: https://huggingface.co/jsonfin17/autotrain-financial-convo-summary-89094143854/ Original model description: --- tags: - autotrain - summarization language: - unk widget: - text: "I love AutoTrain" datasets: - jsonfin17/autotrain-data-financial-convo-summary co2_eq_emissions: emissions: 0.6036233338330799 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 89094143854 - CO2 Emissions (in grams): 0.6036 ## Validation Metrics - Loss: 2.399 - Rouge1: 32.368 - Rouge2: 4.298 - RougeL: 20.788 - RougeLsum: 28.288 - Gen Len: 71.000 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/jsonfin17/autotrain-financial-convo-summary-89094143854 ```
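The repository name advertises a bitsandbytes 8-bit quantisation, but the card itself does not show how such a model is loaded. A hedged sketch follows; the seq2seq class (matching the BART tags), the example dialogue and the generation settings are assumptions.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig

# Quantising the original checkpoint on load is the usual bitsandbytes route;
# this 8-bit repo is the pre-quantised result of the same idea.
source = "jsonfin17/autotrain-financial-convo-summary-89094143854"
tokenizer = AutoTokenizer.from_pretrained(source)
model = AutoModelForSeq2SeqLM.from_pretrained(
    source,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",  # requires the accelerate package
)

dialogue = "Customer: I'd like to discuss refinancing my mortgage. Agent: Sure, let's review your current rate."
inputs = tokenizer(dialogue, return_tensors="pt").to(model.device)
summary_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```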
{}
task
[ "SUMMARIZATION" ]
46,092
fine-tuned/BAAI_bge-large-en-v1_5-1562024-to89-webapp
fine-tuned
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "Medicine", "Biology", "Technology", "Research", "Science", "en", "dataset:fine-tuned/BAAI_bge-large-en-v1_5-1562024-to89-webapp", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-06-15T12:15:06Z
2024-06-15T12:15:46+00:00
9
0
--- datasets: - fine-tuned/BAAI_bge-large-en-v1_5-1562024-to89-webapp - allenai/c4 language: - en license: apache-2.0 pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - Medicine - Biology - Technology - Research - Science --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: scientific research in medicine, biology, and technology ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/BAAI_bge-large-en-v1_5-1562024-to89-webapp', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
null
Non_BioNLP
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: scientific research in medicine, biology, and technology ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/BAAI_bge-large-en-v1_5-1562024-to89-webapp', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
{"datasets": ["fine-tuned/BAAI_bge-large-en-v1_5-1562024-to89-webapp", "allenai/c4"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb", "Medicine", "Biology", "Technology", "Research", "Science"]}
task
[ "TEXT_CLASSIFICATION" ]
46,093
nlpconnect/deberta-v3-xsmall-squad2
nlpconnect
question-answering
[ "transformers", "pytorch", "deberta-v2", "question-answering", "dataset:squad_v2", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
2022-08-28T10:24:54Z
2022-12-06T03:37:01+00:00
134
0
--- datasets: - squad_v2 license: apache-2.0 model-index: - name: nlpconnect/deberta-v3-xsmall-squad2 results: - task: type: question-answering name: Question Answering dataset: name: squad_v2 type: squad_v2 config: squad_v2 split: validation metrics: - type: exact_match value: 79.3917 name: Exact Match verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTFiMWI5YzFlMDZhMzc2NDIwYjNiZmIyMThmOWQxYjFjZmM2ZDQ0OGM2NmNlNmI3Y2U2N2JjMmVkZTgyZjNiOCIsInZlcnNpb24iOjF9.MCw9UJ3MI3Lf5hvOgk7Lw2xZfN4678p7ebG3vnGXX_Avw6fELTPwxZ9qGA-9tL00p4NxaSb3Cx6XAFvWetAIBA - type: f1 value: 82.6738 name: F1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjdiYWY2MzU4YjZhMWQzZGJhZTk3NzU3Y2UwYmQ4MzliZmQxOGUxZDllN2Y0ZmZhYjVlNTE0MzY1MjU5OWMwMCIsInZlcnNpb24iOjF9.zeWLwXy77n0YKxGA5gjySe8p-_nPQxbiPnvQU2tF45IyMmlYKUuLeq4hJnNe-5NgriTf8xkBJBE7Cr5lWHy_Cw - task: type: question-answering name: Question Answering dataset: name: squad type: squad config: plain_text split: validation metrics: - type: exact_match value: 84.9246 name: Exact Match verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGJhYmU0Y2I4Y2UyOGVlOTlkMmQ2OTcyMTZkNTkwNTMzNzhmNzZiYjU4ZDkxMGM5NzAyMjk1M2ExNGIzOWU4NCIsInZlcnNpb24iOjF9.ql1rCId6lQ7Uwq2spG3q2fFppkFGHA1IWQjvyPRhvKdRNzApBO0mu9JjMAv4uNKZX-kmGEkI018_9tAzN7kwDw - type: f1 value: 91.6201 name: F1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjBjMmI0OTFmODVjMzllZDM0NTdmNjU4NGI4NzA4NTJhOWVkMDQ5OTY0MDcyMWEwZTFkODNlY2VhZjU2NWJmZSIsInZlcnNpb24iOjF9.rGvF60bfWIXzB66C7fkdxCtZvRZ_m3onbLaNbs7M4M0Fk27xnMat6IAy1DeTztkOKLoiD2s2NQH6wXid83cgCw --- # Deberta-v3-xsmall-squad2 ## What is SQuAD? Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. ## Inference ```python from transformers import pipeline qa = pipeline("question-answering", model="nlpconnect/deberta-v3-xsmall-squad2") result = qa(context="My name is Sarah and I live in London", question="Where do I live?") ``` ## Accuracy ```json squad_v2 = {'exact': 79.392, 'f1': 82.674} squad = {'exact': 84.925, 'f1': 91.620} ```
null
Non_BioNLP
# Deberta-v3-xsmall-squad2 ## What is SQuAD? Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. ## Inference ```python from transformers import pipeline qa = pipeline("question-answering", model="nlpconnect/deberta-v3-xsmall-squad2") result = qa(context="My name is Sarah and I live in London", question="Where do I live?") ``` ## Accuracy ```json squad_v2 = {'exact': 79.392, 'f1': 82.674} squad = {'exact': 84.925, 'f1': 91.620} ```
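The card stresses that SQuAD2.0 systems must abstain when the passage supports no answer. A small, hedged sketch of surfacing that behaviour through the pipeline's `handle_impossible_answer` flag is shown below; the unanswerable question is made up for illustration.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="nlpconnect/deberta-v3-xsmall-squad2")

# With handle_impossible_answer=True the pipeline may return an empty answer,
# mirroring SQuAD2.0's "no answer is supported by the passage" case.
result = qa(
    question="What is my dog's name?",
    context="My name is Sarah and I live in London",
    handle_impossible_answer=True,
)
print(result)  # an empty "answer" string signals abstention
```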
{"datasets": ["squad_v2"], "license": "apache-2.0", "model-index": [{"name": "nlpconnect/deberta-v3-xsmall-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 79.3917, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTFiMWI5YzFlMDZhMzc2NDIwYjNiZmIyMThmOWQxYjFjZmM2ZDQ0OGM2NmNlNmI3Y2U2N2JjMmVkZTgyZjNiOCIsInZlcnNpb24iOjF9.MCw9UJ3MI3Lf5hvOgk7Lw2xZfN4678p7ebG3vnGXX_Avw6fELTPwxZ9qGA-9tL00p4NxaSb3Cx6XAFvWetAIBA"}, {"type": "f1", "value": 82.6738, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjdiYWY2MzU4YjZhMWQzZGJhZTk3NzU3Y2UwYmQ4MzliZmQxOGUxZDllN2Y0ZmZhYjVlNTE0MzY1MjU5OWMwMCIsInZlcnNpb24iOjF9.zeWLwXy77n0YKxGA5gjySe8p-_nPQxbiPnvQU2tF45IyMmlYKUuLeq4hJnNe-5NgriTf8xkBJBE7Cr5lWHy_Cw"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad", "type": "squad", "config": "plain_text", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 84.9246, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGJhYmU0Y2I4Y2UyOGVlOTlkMmQ2OTcyMTZkNTkwNTMzNzhmNzZiYjU4ZDkxMGM5NzAyMjk1M2ExNGIzOWU4NCIsInZlcnNpb24iOjF9.ql1rCId6lQ7Uwq2spG3q2fFppkFGHA1IWQjvyPRhvKdRNzApBO0mu9JjMAv4uNKZX-kmGEkI018_9tAzN7kwDw"}, {"type": "f1", "value": 91.6201, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjBjMmI0OTFmODVjMzllZDM0NTdmNjU4NGI4NzA4NTJhOWVkMDQ5OTY0MDcyMWEwZTFkODNlY2VhZjU2NWJmZSIsInZlcnNpb24iOjF9.rGvF60bfWIXzB66C7fkdxCtZvRZ_m3onbLaNbs7M4M0Fk27xnMat6IAy1DeTztkOKLoiD2s2NQH6wXid83cgCw"}]}]}]}
task
[ "QUESTION_ANSWERING" ]
46,094
mrapacz/interlinear-pl-mt5-base-emb-auto-diacritics-bh
mrapacz
text2text-generation
[ "transformers", "pytorch", "morph-t5-auto", "text2text-generation", "pl", "dataset:mrapacz/greek-interlinear-translations", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-02-08T12:28:04Z
2025-02-21T21:31:05+00:00
17
0
--- base_model: - mT5-base datasets: - mrapacz/greek-interlinear-translations language: - pl library_name: transformers license: cc-by-sa-4.0 metrics: - bleu --- # Model Card for Ancient Greek to Polish Interlinear Translation Model This model performs interlinear translation from Ancient Greek to Polish, maintaining word-level alignment between source and target texts. You can find the source code used for training this and other models trained as part of this project in the [GitHub repository](https://github.com/mrapacz/loreslm-interlinear-translation). ## Model Details ### Model Description - **Developed By:** Maciej Rapacz, AGH University of Kraków - **Model Type:** MorphT5AutoForConditionalGeneration - **Base Model:** mT5-base - **Tokenizer:** mT5 - **Language(s):** Ancient Greek (source) → Polish (target) - **License:** CC BY-NC-SA 4.0 - **Tag Set:** BH (Bible Hub) - **Text Preprocessing:** Diacritics - **Morphological Encoding:** emb-auto ### Model Performance - **BLEU Score:** 54.63 - **SemScore:** 0.92 ### Model Sources - **Repository:** https://github.com/mrapacz/loreslm-interlinear-translation - **Paper:** https://aclanthology.org/2025.loreslm-1.11/ ## Usage Example > **Note**: This model uses a modification of T5-family models that includes dedicated embedding layers for encoding morphological information. To load these models, install the [morpht5](https://github.com/mrapacz/loreslm-interlinear-translation/blob/master/morpht5/README.md) package: > ```bash > pip install morpht5 > ``` ```python >>> from morpht5 import MorphT5AutoForConditionalGeneration, MorphT5Tokenizer >>> text = ['Λέγει', 'αὐτῷ', 'ὁ', 'Ἰησοῦς', 'Ἔγειρε', 'ἆρον', 'τὸν', 'κράβαττόν', 'σου', 'καὶ', 'περιπάτει'] >>> tags = ['V-PIA-3S', 'PPro-DM3S', 'Art-NMS', 'N-NMS', 'V-PMA-2S', 'V-AMA-2S', 'Art-AMS', 'N-AMS', 'PPro-G2S', 'Conj', 'V-PMA-2S'] >>> tokenizer = MorphT5Tokenizer.from_pretrained("mrapacz/interlinear-pl-mt5-base-emb-auto-diacritics-bh") >>> inputs = tokenizer( text=text, morph_tags=tags, return_tensors="pt" ) >>> model = MorphT5AutoForConditionalGeneration.from_pretrained("mrapacz/interlinear-pl-mt5-base-emb-auto-diacritics-bh") >>> outputs = model.generate( **inputs, max_new_tokens=100, early_stopping=True, ) >>> decoded = tokenizer.decode(outputs[0], skip_special_tokens=True, keep_block_separator=True) >>> decoded = decoded.replace(tokenizer.target_block_separator_token, " | ") >>> decoded 'Mówi | mu | - | Jezus | wstawaj | weź | - | matę | swoją | i | chodź' ``` ## Citation If you use this model, please cite the following paper: ``` @inproceedings{rapacz-smywinski-pohl-2025-low, title = "Low-Resource Interlinear Translation: Morphology-Enhanced Neural Models for {A}ncient {G}reek", author = "Rapacz, Maciej and Smywi{\'n}ski-Pohl, Aleksander", editor = "Hettiarachchi, Hansi and Ranasinghe, Tharindu and Rayson, Paul and Mitkov, Ruslan and Gaber, Mohamed and Premasiri, Damith and Tan, Fiona Anting and Uyangodage, Lasitha", booktitle = "Proceedings of the First Workshop on Language Models for Low-Resource Languages", month = jan, year = "2025", address = "Abu Dhabi, United Arab Emirates", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2025.loreslm-1.11/", pages = "145--165", abstract = "Contemporary machine translation systems prioritize fluent, natural-sounding output with flexible word ordering. In contrast, interlinear translation maintains the source text`s syntactic structure by aligning target language words directly beneath their source counterparts. 
Despite its importance in classical scholarship, automated approaches to interlinear translation remain understudied. We evaluated neural interlinear translation from Ancient Greek to English and Polish using four transformer-based models: two Ancient Greek-specialized (GreTa and PhilTa) and two general-purpose multilingual models (mT5-base and mT5-large). Our approach introduces novel morphological embedding layers and evaluates text preprocessing and tag set selection across 144 experimental configurations using a word-aligned parallel corpus of the Greek New Testament. Results show that morphological features through dedicated embedding layers significantly enhance translation quality, improving BLEU scores by 35{\%} (44.67 {\textrightarrow} 60.40) for English and 38{\%} (42.92 {\textrightarrow} 59.33) for Polish compared to baseline models. PhilTa achieves state-of-the-art performance for English, while mT5-large does so for Polish. Notably, PhilTa maintains stable performance using only 10{\%} of training data. Our findings challenge the assumption that modern neural architectures cannot benefit from explicit morphological annotations. While preprocessing strategies and tag set selection show minimal impact, the substantial gains from morphological embeddings demonstrate their value in low-resource scenarios." } ```
null
Non_BioNLP
# Model Card for Ancient Greek to Polish Interlinear Translation Model This model performs interlinear translation from Ancient Greek to Polish, maintaining word-level alignment between source and target texts. You can find the source code used for training this and other models trained as part of this project in the [GitHub repository](https://github.com/mrapacz/loreslm-interlinear-translation). ## Model Details ### Model Description - **Developed By:** Maciej Rapacz, AGH University of Kraków - **Model Type:** MorphT5AutoForConditionalGeneration - **Base Model:** mT5-base - **Tokenizer:** mT5 - **Language(s):** Ancient Greek (source) → Polish (target) - **License:** CC BY-NC-SA 4.0 - **Tag Set:** BH (Bible Hub) - **Text Preprocessing:** Diacritics - **Morphological Encoding:** emb-auto ### Model Performance - **BLEU Score:** 54.63 - **SemScore:** 0.92 ### Model Sources - **Repository:** https://github.com/mrapacz/loreslm-interlinear-translation - **Paper:** https://aclanthology.org/2025.loreslm-1.11/ ## Usage Example > **Note**: This model uses a modification of T5-family models that includes dedicated embedding layers for encoding morphological information. To load these models, install the [morpht5](https://github.com/mrapacz/loreslm-interlinear-translation/blob/master/morpht5/README.md) package: > ```bash > pip install morpht5 > ``` ```python >>> from morpht5 import MorphT5AutoForConditionalGeneration, MorphT5Tokenizer >>> text = ['Λέγει', 'αὐτῷ', 'ὁ', 'Ἰησοῦς', 'Ἔγειρε', 'ἆρον', 'τὸν', 'κράβαττόν', 'σου', 'καὶ', 'περιπάτει'] >>> tags = ['V-PIA-3S', 'PPro-DM3S', 'Art-NMS', 'N-NMS', 'V-PMA-2S', 'V-AMA-2S', 'Art-AMS', 'N-AMS', 'PPro-G2S', 'Conj', 'V-PMA-2S'] >>> tokenizer = MorphT5Tokenizer.from_pretrained("mrapacz/interlinear-pl-mt5-base-emb-auto-diacritics-bh") >>> inputs = tokenizer( text=text, morph_tags=tags, return_tensors="pt" ) >>> model = MorphT5AutoForConditionalGeneration.from_pretrained("mrapacz/interlinear-pl-mt5-base-emb-auto-diacritics-bh") >>> outputs = model.generate( **inputs, max_new_tokens=100, early_stopping=True, ) >>> decoded = tokenizer.decode(outputs[0], skip_special_tokens=True, keep_block_separator=True) >>> decoded = decoded.replace(tokenizer.target_block_separator_token, " | ") >>> decoded 'Mówi | mu | - | Jezus | wstawaj | weź | - | matę | swoją | i | chodź' ``` ## Citation If you use this model, please cite the following paper: ``` @inproceedings{rapacz-smywinski-pohl-2025-low, title = "Low-Resource Interlinear Translation: Morphology-Enhanced Neural Models for {A}ncient {G}reek", author = "Rapacz, Maciej and Smywi{\'n}ski-Pohl, Aleksander", editor = "Hettiarachchi, Hansi and Ranasinghe, Tharindu and Rayson, Paul and Mitkov, Ruslan and Gaber, Mohamed and Premasiri, Damith and Tan, Fiona Anting and Uyangodage, Lasitha", booktitle = "Proceedings of the First Workshop on Language Models for Low-Resource Languages", month = jan, year = "2025", address = "Abu Dhabi, United Arab Emirates", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2025.loreslm-1.11/", pages = "145--165", abstract = "Contemporary machine translation systems prioritize fluent, natural-sounding output with flexible word ordering. In contrast, interlinear translation maintains the source text`s syntactic structure by aligning target language words directly beneath their source counterparts. Despite its importance in classical scholarship, automated approaches to interlinear translation remain understudied. 
We evaluated neural interlinear translation from Ancient Greek to English and Polish using four transformer-based models: two Ancient Greek-specialized (GreTa and PhilTa) and two general-purpose multilingual models (mT5-base and mT5-large). Our approach introduces novel morphological embedding layers and evaluates text preprocessing and tag set selection across 144 experimental configurations using a word-aligned parallel corpus of the Greek New Testament. Results show that morphological features through dedicated embedding layers significantly enhance translation quality, improving BLEU scores by 35{\%} (44.67 {\textrightarrow} 60.40) for English and 38{\%} (42.92 {\textrightarrow} 59.33) for Polish compared to baseline models. PhilTa achieves state-of-the-art performance for English, while mT5-large does so for Polish. Notably, PhilTa maintains stable performance using only 10{\%} of training data. Our findings challenge the assumption that modern neural architectures cannot benefit from explicit morphological annotations. While preprocessing strategies and tag set selection show minimal impact, the substantial gains from morphological embeddings demonstrate their value in low-resource scenarios." } ```
{"base_model": ["mT5-base"], "datasets": ["mrapacz/greek-interlinear-translations"], "language": ["pl"], "library_name": "transformers", "license": "cc-by-sa-4.0", "metrics": ["bleu"]}
task
[ "TRANSLATION" ]
46,095
csocsci/mt5-base-multi-label-all-cs-iv
csocsci
text2text-generation
[ "transformers", "pytorch", "mt5", "text2text-generation", "multilingual", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-09-22T13:25:13Z
2023-09-24T05:14:00+00:00
8
0
--- language: - multilingual license: mit --- # Model Card for mt5-base-multi-label-all-cs-iv <!-- Provide a quick summary of what the model is/does. --> This model is fine-tuned for multi-label seq2seq text classification of Supportive Interactions in Instant Messenger dialogs of Adolescents. ## Model Description The model was fine-tuned on a dataset of Czech Instant Messenger dialogs of Adolescents. The classification is multi-label. For each of the utterances in the input, the model outputs any combination of the tags: 'NO TAG', 'Informační podpora', 'Emocionální podpora', 'Začlenění do skupiny', 'Uznání', 'Nabídka pomoci', as a string joined with ', ' (ordered alphabetically). Each label indicates the presence of that category of Supportive Interactions: 'no tag', 'informational support', 'emotional support', 'social companionship', 'appraisal', 'instrumental support' in each of the utterances of the input. The input of the model is a sequence of utterances joined with ';'. The output is a sequence of per-utterance labels such as: 'NO TAG; Informační podpora, Uznání; NO TAG' - **Developed by:** Anonymous - **Language(s):** multilingual - **Finetuned from:** mt5-base ## Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/chi2024submission - **Paper:** Stay tuned! ## Usage Here is how to use this model to classify a context-window of a dialogue: ```python import itertools from transformers import AutoModelForSeq2SeqLM, AutoTokenizer import torch # Target dialog context window test_texts = ['Utterance1;Utterance2;Utterance3'] # Load the model and tokenizer checkpoint_path = "chi2024/mt5-base-multi-label-all-cs-iv" model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint_path)\ .to("cuda" if torch.cuda.is_available() else "cpu") tokenizer = AutoTokenizer.from_pretrained(checkpoint_path) # Define helper functions def predict_one(text): inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=256).to(model.device) outputs = model.generate(**inputs) decoded = [text.split(",")[0].strip() for text in tokenizer.batch_decode(outputs, skip_special_tokens=True)] predicted_sequence = list( itertools.chain(*(pred_one.split("; ") for pred_one in decoded))) return predicted_sequence # Run the prediction dec = predict_one(test_texts[0]) print(dec) ```
null
Non_BioNLP
# Model Card for mt5-base-multi-label-all-cs-iv <!-- Provide a quick summary of what the model is/does. --> This model is fine-tuned for multi-label seq2seq text classification of Supportive Interactions in Instant Messenger dialogs of Adolescents. ## Model Description The model was fine-tuned on a dataset of Czech Instant Messenger dialogs of Adolescents. The classification is multi-label. For each of the utterances in the input, the model outputs any combination of the tags: 'NO TAG', 'Informační podpora', 'Emocionální podpora', 'Začlenění do skupiny', 'Uznání', 'Nabídka pomoci', as a string joined with ', ' (ordered alphabetically). Each label indicates the presence of that category of Supportive Interactions: 'no tag', 'informational support', 'emotional support', 'social companionship', 'appraisal', 'instrumental support' in each of the utterances of the input. The input of the model is a sequence of utterances joined with ';'. The output is a sequence of per-utterance labels such as: 'NO TAG; Informační podpora, Uznání; NO TAG' - **Developed by:** Anonymous - **Language(s):** multilingual - **Finetuned from:** mt5-base ## Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/chi2024submission - **Paper:** Stay tuned! ## Usage Here is how to use this model to classify a context-window of a dialogue: ```python import itertools from transformers import AutoModelForSeq2SeqLM, AutoTokenizer import torch # Target dialog context window test_texts = ['Utterance1;Utterance2;Utterance3'] # Load the model and tokenizer checkpoint_path = "chi2024/mt5-base-multi-label-all-cs-iv" model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint_path)\ .to("cuda" if torch.cuda.is_available() else "cpu") tokenizer = AutoTokenizer.from_pretrained(checkpoint_path) # Define helper functions def predict_one(text): inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=256).to(model.device) outputs = model.generate(**inputs) decoded = [text.split(",")[0].strip() for text in tokenizer.batch_decode(outputs, skip_special_tokens=True)] predicted_sequence = list( itertools.chain(*(pred_one.split("; ") for pred_one in decoded))) return predicted_sequence # Run the prediction dec = predict_one(test_texts[0]) print(dec) ```
{"language": ["multilingual"], "license": "mit"}
task
[ "TEXT_CLASSIFICATION" ]
46,096
pitehu/T5_NER_CONLL_LIST
pitehu
text2text-generation
[ "transformers", "pytorch", "t5", "text2text-generation", "Named Entity Recognition", "en", "dataset:wmt19", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2022-01-20T14:32:20+00:00
117
0
--- datasets: - wmt19 language: - en license: apache-2.0 metrics: - bleu - sacrebleu tags: - Named Entity Recognition inference: parameters: max_length: 1024 ---
null
Non_BioNLP
{"datasets": ["wmt19"], "language": ["en"], "license": "apache-2.0", "metrics": ["bleu", "sacrebleu"], "tags": ["Named Entity Recognition"], "inference": {"parameters": {"max_length": 1024}}}
task
[ "NAMED_ENTITY_RECOGNITION" ]
46,097
Cinezys/calltagger_v2_cpu
Cinezys
text-classification
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "region:us" ]
2024-07-12T17:52:26Z
2024-07-12T18:01:57+00:00
4
0
--- library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: [] inference: true --- # SetFit This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit <!-- - **Sentence Transformer:** [Unknown](https://huggingface.co/unknown) --> - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 9 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("setfit_model_id") # Run inference preds = model("I loved the spiderman movie!") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 3.0.1 - Transformers: 4.39.0 - PyTorch: 2.3.0+cu121 - Datasets: 2.20.0 - Tokenizers: 0.15.2 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SetFit This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit <!-- - **Sentence Transformer:** [Unknown](https://huggingface.co/unknown) --> - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 9 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("setfit_model_id") # Run inference preds = model("I loved the spiderman movie!") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 3.0.1 - Transformers: 4.39.0 - PyTorch: 2.3.0+cu121 - Datasets: 2.20.0 - Tokenizers: 0.15.2 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
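The card describes the two-stage SetFit recipe (contrastive fine-tuning of a sentence transformer, then a logistic-regression head) but only demonstrates inference. The sketch below illustrates the corresponding training loop with the `setfit` Trainer; the base encoder, the placeholder dataset and the labels are assumptions, not the author's data.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot data; the real model was trained on 9 call-tagging classes.
train_ds = Dataset.from_dict({
    "text": ["caller asks about billing", "caller reports an outage"],
    "label": [0, 1],
})

# Any sentence-transformers encoder can serve as the body; this choice is illustrative.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_ds, metric="accuracy")
trainer.train()  # contrastive fine-tuning, then fitting the classification head

preds = model.predict(["caller asks about billing"])
print(preds)
```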
{"library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [], "inference": true}
task
[ "TEXT_CLASSIFICATION" ]
46,098
fmartinmonier/distilbert-base-uncased-finetuned-cola
fmartinmonier
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-10-06T10:08:28Z
2022-10-06T10:52:34+00:00
9
0
--- datasets: - glue license: apache-2.0 metrics: - matthews_correlation tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: type: text-classification name: Text Classification dataset: name: glue type: glue args: cola metrics: - type: matthews_correlation value: 0.5477951635989807 name: Matthews Correlation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8133 - Matthews Correlation: 0.5478 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5259 | 1.0 | 535 | 0.5401 | 0.4009 | | 0.3513 | 2.0 | 1070 | 0.5403 | 0.4876 | | 0.2373 | 3.0 | 1605 | 0.5422 | 0.5384 | | 0.1795 | 4.0 | 2140 | 0.7586 | 0.5309 | | 0.1282 | 5.0 | 2675 | 0.8133 | 0.5478 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8133 - Matthews Correlation: 0.5478 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5259 | 1.0 | 535 | 0.5401 | 0.4009 | | 0.3513 | 2.0 | 1070 | 0.5403 | 0.4876 | | 0.2373 | 3.0 | 1605 | 0.5422 | 0.5384 | | 0.1795 | 4.0 | 2140 | 0.7586 | 0.5309 | | 0.1282 | 5.0 | 2675 | 0.8133 | 0.5478 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
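The hyperparameter list above maps onto a standard `transformers` Trainer run. The following is a hedged reconstruction, not the author's exact script: the dataset preprocessing, the metric wiring and the evaluation strategy are assumptions based on the usual GLUE/CoLA recipe.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset
import evaluate

raw = load_dataset("glue", "cola")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True)

encoded = raw.map(tokenize, batched=True)
metric = evaluate.load("matthews_correlation")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return metric.compute(predictions=logits.argmax(-1), references=labels)

# Values mirror the hyperparameter list in the card above.
args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    seed=42,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```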
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["matthews_correlation"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5477951635989807, "name": "Matthews Correlation"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,099
muhtasham/small-mlm-tweet-target-imdb
muhtasham
text-classification
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-12-11T04:57:53Z
2022-12-11T05:07:45+00:00
113
0
--- datasets: - imdb license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: small-mlm-tweet-target-imdb results: - task: type: text-classification name: Text Classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - type: accuracy value: 0.88784 name: Accuracy - type: f1 value: 0.9405881854394441 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-mlm-tweet-target-imdb This model is a fine-tuned version of [muhtasham/small-mlm-tweet](https://huggingface.co/muhtasham/small-mlm-tweet) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.4422 - Accuracy: 0.8878 - F1: 0.9406 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.3515 | 0.64 | 500 | 0.1494 | 0.9388 | 0.9684 | | 0.2452 | 1.28 | 1000 | 0.1439 | 0.9450 | 0.9717 | | 0.1956 | 1.92 | 1500 | 0.2199 | 0.9156 | 0.9559 | | 0.1398 | 2.56 | 2000 | 0.4328 | 0.876 | 0.9339 | | 0.1102 | 3.2 | 2500 | 0.4422 | 0.8878 | 0.9406 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-mlm-tweet-target-imdb This model is a fine-tuned version of [muhtasham/small-mlm-tweet](https://huggingface.co/muhtasham/small-mlm-tweet) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.4422 - Accuracy: 0.8878 - F1: 0.9406 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.3515 | 0.64 | 500 | 0.1494 | 0.9388 | 0.9684 | | 0.2452 | 1.28 | 1000 | 0.1439 | 0.9450 | 0.9717 | | 0.1956 | 1.92 | 1500 | 0.2199 | 0.9156 | 0.9559 | | 0.1398 | 2.56 | 2000 | 0.4328 | 0.876 | 0.9339 | | 0.1102 | 3.2 | 2500 | 0.4422 | 0.8878 | 0.9406 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.13.2
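The card reports accuracy and F1 at each evaluation step. A small, hedged sketch of the metric function that would produce those two numbers with the `evaluate` library is shown below; treating F1 as binary (IMDB has two classes) is an assumption.

```python
import evaluate
import numpy as np

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    # eval_pred carries the raw logits and the gold labels from the Trainer.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        "f1": f1.compute(predictions=preds, references=labels)["f1"],
    }
```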
{"datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "small-mlm-tweet-target-imdb", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "train", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.88784, "name": "Accuracy"}, {"type": "f1", "value": 0.9405881854394441, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,100