Dataset schema (one row per Hugging Face model card):

| Column | Type | Range / Values |
|---|---|---|
| id | string | length 6–113 |
| author | string | length 2–36 |
| task_category | string | 42 classes |
| tags | list | length 1–4.05k |
| created_time | timestamp[ns, tz=UTC] | 2022-03-02 23:29:04 – 2025-04-10 08:38:38 |
| last_modified | string (date) | 2020-05-14 13:13:12 – 2025-04-19 04:15:39 |
| downloads | int64 | 0–118M |
| likes | int64 | 0–4.86k |
| README | string | length 30–1.01M |
| matched_bigbio_names | list (nullable) | length 1–8 |
| is_bionlp | string | 3 classes |
| model_cards | string | length 0–1M |
| metadata | string | length 2–698k |
| source | string | 2 classes |
| matched_task | list (nullable) | length 1–10 |
| __index_level_0__ | int64 | 0–46.9k |
Kiran2004/Roberta_QCA_Squad
|
Kiran2004
|
question-answering
|
[
"transformers",
"tf",
"roberta",
"question-answering",
"generated_from_keras_callback",
"dataset:rajpurkar/squad",
"base_model:deepset/roberta-base-squad2",
"base_model:finetune:deepset/roberta-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | 2024-04-14T05:41:42Z |
2024-04-24T14:45:56+00:00
| 20 | 0 |
---
base_model: deepset/roberta-base-squad2
datasets:
- rajpurkar/squad
license: cc-by-4.0
metrics:
- accuracy
- precision
- recall
- f1
tags:
- generated_from_keras_callback
model-index:
- name: Kiran2004/Roberta_QCA_Squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
## Model description
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the [SQuAD](https://huggingface.co/datasets/rajpurkar/squad) dataset. It was trained on question-answer pairs, including unanswerable questions, for the task of question answering over 6 epochs.
It achieves the following results after training:
- Train Loss: 0.1434
- Validation Loss: 0.4821
## Model Training
- **Training Dataset**: [squad](https://huggingface.co/datasets/rajpurkar/squad)
- **Pretrained Model**: [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2)
## Evaluation
The model was evaluated using accuracy, precision, recall, and F1 score:
- Accuracy: 0.9100
- Precision: 0.9099
- Recall: 0.9099
- F1 Score: 0.9603
## Example Usage
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a question-answering pipeline
model_name = "Kiran2004/Roberta_QCA_Squad"
question_answerer = pipeline("question-answering", model=model_name)

# Ask a question against a short context passage
question = "How many programming languages does BLOOM support?"
context = "BLOOM has 176 billion parameters and can generate text in 46 natural languages and 13 programming languages."
question_answerer(question=question, context=context)
```
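The pipeline call returns a dictionary with the extracted answer span and a confidence score; a quick way to inspect it (the numeric values in the comment are illustrative):

```python
result = question_answerer(question=question, context=context)
# The question-answering pipeline returns the keys 'score', 'start', 'end' and 'answer',
# e.g. {'score': 0.97, 'start': 110, 'end': 112, 'answer': '13'}
print(result["answer"], result["score"])
```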
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 250, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
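For reference, the optimizer configuration above corresponds roughly to the following Keras setup (a sketch reconstructed from the listed values, not the original training script):

```python
import tensorflow as tf

# Adam with the linear PolynomialDecay schedule listed above (2e-05 decaying to 0 over 250 steps)
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=250,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```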
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.5774 | 0.4305 | 0 |
| 0.3089 | 0.4597 | 1 |
| 0.2268 | 0.4541 | 2 |
| 0.1852 | 0.4718 | 3 |
| 0.1618 | 0.4821 | 4 |
| 0.1434 | 0.4821 | 5 |
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
|
{"base_model": "deepset/roberta-base-squad2", "datasets": ["rajpurkar/squad"], "license": "cc-by-4.0", "metrics": ["accuracy", "precision", "recall", "f1"], "tags": ["generated_from_keras_callback"], "model-index": [{"name": "Kiran2004/Roberta_QCA_Squad", "results": []}]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,237 |
Savianto/qlora-mistral
|
Savianto
| null |
[
"safetensors",
"region:us"
] | 2024-10-09T06:36:19Z |
2024-10-09T16:45:10+00:00
| 0 | 0 |
---
{}
---
This model card provides details on a fine-tuned version of Savianto/qlora-mistral, a language model trained using the QLoRA technique on conversational data for enhanced text generation, particularly in question-answering and conversational tasks.
## Model Details
### Model Description
This is a fine-tuned version of the Savianto/qlora-mistral model using the QLoRA technique. The fine-tuning was done to improve the model's ability to generate coherent and context-aware responses in conversational and question-answering tasks. QLoRA allows for efficient fine-tuning of large models while optimizing for memory usage.
- **Developed by:** Yash Sawant
- **Model type:** Causal Language Model (AutoModelForCausalLM)
- **Language(s) (NLP):** English
- **License:** [Specify License Type Here]
- **Finetuned from model:** Savianto/qlora-mistral (base checkpoint: teknium/OpenHermes-2-Mistral-7B)
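For illustration, a QLoRA fine-tuning setup of the kind described above typically combines 4-bit NF4 quantization with small trainable LoRA adapters. The sketch below shows that pattern; the adapter rank, target modules, and other values are assumptions for the example, not settings taken from this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "teknium/OpenHermes-2-Mistral-7B"

# Load the frozen base model in 4-bit NF4 so fine-tuning fits in limited GPU memory
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach small trainable LoRA adapters; only these are updated during training
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```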
## Uses
This model can be directly used for:
- Question answering
- Conversational agents (chatbots)
- Text generation tasks (summarization, text completion)
### Direct Use
This model can be fine-tuned further for specific tasks such as:
- Domain-specific question answering
- Custom chatbot agents
- Document summarization
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer
model = AutoModelForCausalLM.from_pretrained("Savianto/qlora-mistral-finetuned")
tokenizer = AutoTokenizer.from_pretrained("Savianto/qlora-mistral-finetuned")

# Example prompt
prompt = "What is the capital of France?"

# Tokenize and generate output
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=50)

# Decode the response
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
```
[More Information Needed]
## Training Details
### Training Data
The model was fine-tuned using a conversational dataset, focusing on question-answer pairs and dialogue examples. This enhances the model's ability to generate contextually relevant and coherent responses.
### Training Procedure
- **Hardware:** GPU (NVIDIA A100, 40GB)
- **Training Time:** 5 epochs with early stopping
- **Optimizer:** AdamW
- **Learning Rate:** 2e-5
- **Batch Size:** 16
- **Training regime:** Mixed Precision (fp16)
#### Preprocessing
Tokenized the input text with padding and truncation for consistent input lengths.
#### Speeds, Sizes, Times
- **Training Time:** ~3 hours
- **Model Size:** ~7B parameters (Base Model: Savianto/qlora-mistral)
## Evaluation
### Testing Data
The model was evaluated on a validation split of the fine-tuning dataset, with question-answer pairs and conversational exchanges.
### Metrics
- **Perplexity:** Evaluated using standard perplexity for text generation models (see the sketch below).
- **Coherence:** Human-evaluated coherence in generated responses.
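A minimal sketch of such a perplexity check (the model id and sample text are placeholders; perplexity is the exponential of the mean token-level cross-entropy loss):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Savianto/qlora-mistral-finetuned"  # placeholder id for this sketch
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

text = "What is the capital of France? The capital of France is Paris."
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    out = model(**enc, labels=enc["input_ids"])
print(torch.exp(out.loss).item())  # perplexity of this sample
```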
### Results
The model exhibited low perplexity scores on the validation set and performed well in conversational coherence during testing.
### Summary
The model is well-suited for question-answering tasks, conversational agents, and general text generation tasks, but may require additional tuning for domain-specific applications.
## Model Examination
No further interpretability analysis was conducted on this model.
## Environmental Impact
Carbon emissions for this model can be estimated using the Machine Learning Impact calculator based on the following parameters:
- **Hardware Type:** NVIDIA A100
- **Training Hours:** ~3 hours
- **Cloud Provider:** Google Cloud
- **Compute Region:** US-Central
- **Carbon Emitted:** 0.98 kg CO2eq (estimated)
## Technical Specifications
### Model Architecture and Objective
This model is based on the Mistral architecture, with the objective of generating coherent and contextually aware responses in conversation and question-answering tasks.
### Compute Infrastructure
#### Hardware
- NVIDIA A100 40GB GPU
#### Software
- Python 3.8
- Transformers (Hugging Face) v4.x
- PyTorch 1.10+
- Accelerate
| null |
Non_BioNLP
|
|
{}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 42,238 |
SEBIS/legal_t5_small_cls_cs
|
SEBIS
|
text2text-generation
|
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"classification Cszech model",
"dataset:jrc-acquis",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2021-06-23T10:27:21+00:00
| 175 | 1 |
---
datasets:
- jrc-acquis
language: Cszech
tags:
- classification Cszech model
widget:
- text: 'Bez námitek k navrhovanému spojení (Případ č. COMP/M.4169 – Virgin/CPW/JV)
(2006/C 103/16) (Text s významem pro EHP) Dne 29. března 2006 se Komise rozhodla
nevznést námitky proti výše uvedenému spojení a prohlásit ho za slučitelné se
společným trhem. Toto rozhodnutí je založeno na čl. 6 odst. 1 písm. b) nařízení
Rady (ES) č. 139/2004. Celý text rozhodnutí je přístupný pouze v angličtině a
bude uveřejněn poté, co bude zbaven obchodního tajemství, které může případně
obsahovat. Text bude dosažitelný: - na webové stránce Europa – hospodářská soutěž
(http://europa.eu.int/comm/competition/mergers/cases/). Tato webová stránka umožňuje
vyhledat jednotlivá rozhodnutí o spojení, a to včetně společnosti, čísla případu,
data a indexu odvětví hospodářství. - v elektronické podobě na webové stránce
EUR-Lex, pod dokumentem č. 32006M4169. EUR-Lex umožňuje přístup k Evropskému právu
přes Internet. (http://europa.eu.int/eur-lex/lex) --------------------------------------------------'
---
# legal_t5_small_cls_cs model
Model for classification of legal text written in Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from JRC-Acquis.
## Model description
legal_t5_small_cls_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales down the baseline T5 model by using `d_model = 512`, `d_ff = 2048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for classification of legal texts written in Czech.
### How to use
Here is how to use this model to classify legal text written in Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_cs"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_cls_cs",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,
)
cs_text = "Bez námitek k navrhovanému spojení (Případ č. COMP/M.4169 – Virgin/CPW/JV) (2006/C 103/16) (Text s významem pro EHP) Dne 29. března 2006 se Komise rozhodla nevznést námitky proti výše uvedenému spojení a prohlásit ho za slučitelné se společným trhem. Toto rozhodnutí je založeno na čl. 6 odst. 1 písm. b) nařízení Rady (ES) č. 139/2004. Celý text rozhodnutí je přístupný pouze v angličtině a bude uveřejněn poté, co bude zbaven obchodního tajemství, které může případně obsahovat. Text bude dosažitelný: - na webové stránce Europa – hospodářská soutěž (http://europa.eu.int/comm/competition/mergers/cases/). Tato webová stránka umožňuje vyhledat jednotlivá rozhodnutí o spojení, a to včetně společnosti, čísla případu, data a indexu odvětví hospodářství. - v elektronické podobě na webové stránce EUR-Lex, pod dokumentem č. 32006M4169. EUR-Lex umožňuje přístup k Evropskému právu přes Internet. (http://europa.eu.int/eur-lex/lex) --------------------------------------------------"
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_cls_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of 18 thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) used by this model.
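A minimal sketch of building such a SentencePiece vocabulary (the corpus path, model prefix, and vocabulary size below are illustrative assumptions, not values taken from this card):

```python
import sentencepiece as spm

# Train a unigram SentencePiece model on a (hypothetical) parallel-corpus dump
spm.SentencePieceTrainer.train(
    input="parallel_corpus.txt",   # one sentence per line, all language pairs
    model_prefix="legal_t5_spm",
    vocab_size=32000,              # illustrative vocabulary size
    model_type="unigram",
)

# Load it and tokenize a sample sentence
sp = spm.SentencePieceProcessor(model_file="legal_t5_spm.model")
print(sp.encode("Toto rozhodnutí je založeno na čl. 6 odst. 1.", out_type=str))
```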
### Pretraining
## Evaluation results
When the model is used on the classification test dataset, it achieves the following results:
Test results:
| Model | F1 score |
|:-----:|:-----:|
| legal_t5_small_cls_cs | 0.6297|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
| null |
Non_BioNLP
|
|
{"datasets": ["jrc-acquis"], "language": "Cszech", "tags": ["classification Cszech model"], "widget": [{"text": "Bez námitek k navrhovanému spojení (Případ č. COMP/M.4169 – Virgin/CPW/JV) (2006/C 103/16) (Text s významem pro EHP) Dne 29. března 2006 se Komise rozhodla nevznést námitky proti výše uvedenému spojení a prohlásit ho za slučitelné se společným trhem. Toto rozhodnutí je založeno na čl. 6 odst. 1 písm. b) nařízení Rady (ES) č. 139/2004. Celý text rozhodnutí je přístupný pouze v angličtině a bude uveřejněn poté, co bude zbaven obchodního tajemství, které může případně obsahovat. Text bude dosažitelný: - na webové stránce Europa – hospodářská soutěž (http://europa.eu.int/comm/competition/mergers/cases/). Tato webová stránka umožňuje vyhledat jednotlivá rozhodnutí o spojení, a to včetně společnosti, čísla případu, data a indexu odvětví hospodářství. - v elektronické podobě na webové stránce EUR-Lex, pod dokumentem č. 32006M4169. EUR-Lex umožňuje přístup k Evropskému právu přes Internet. (http://europa.eu.int/eur-lex/lex) --------------------------------------------------"}]}
|
task
|
[
"TRANSLATION"
] | 42,239 |
hagsaeng/MachinLearningBootCamp_QAclassifier_Gemma-2B
|
hagsaeng
|
text-classification
|
[
"tensorboard",
"safetensors",
"text-classification",
"question-classification",
"LoRA",
"quantization",
"en",
"dataset:squad",
"dataset:glue",
"base_model:google/gemma-2b-it",
"base_model:finetune:google/gemma-2b-it",
"license:apache-2.0",
"region:us"
] | 2024-10-03T11:26:26Z |
2024-10-03T11:39:13+00:00
| 0 | 0 |
---
base_model: google/gemma-2b-it
datasets:
- squad
- glue
language: en
license: apache-2.0
metrics:
- accuracy
- f1
- precision
- recall
model_name: question-classification-lora-quant
tags:
- text-classification
- question-classification
- LoRA
- quantization
widget:
- text: What is the capital of France?
- text: This is a beautiful day.
---
# Model Card: Question Classification using LoRA with Quantization
## Model Overview
This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) designed to classify text into two categories: **QUESTION** or **NOT_QUESTION**. It was fine-tuned on a custom dataset that combines the **SQuAD** dataset (containing questions) and the **GLUE SST-2** dataset (containing general non-question sentences).
### Model Architecture
- Base Model: `google/gemma-2b-it`
- Fine-tuning Method: LoRA (Low-Rank Adaptation) with k-bit quantization (4-bit quantization with NF4).
- Configurations:
- Quantization: 4-bit quantization using `BitsAndBytesConfig`
- Adapter (LoRA) settings:
- Rank: 64
- LoRA Alpha: 32
- Dropout: 0.05
- Target Modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`
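As a rough illustration, the adapter configuration above could be instantiated as follows (a sketch only; the quantization compute dtype and the PEFT task type are assumptions, and the actual training script is not part of this card):

```python
import torch
from transformers import AutoModelForSequenceClassification, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the base model, as described above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForSequenceClassification.from_pretrained(
    "google/gemma-2b-it", num_labels=2, quantization_config=bnb_config, device_map="auto"
)

# LoRA adapter with the settings listed in this card
lora = LoraConfig(
    r=64, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="SEQ_CLS",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()
```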
## Dataset
The model was trained using a combination of two datasets:
- **SQuAD v1.1** (Question dataset)
- **GLUE SST-2** (Non-question dataset)
Each dataset was preprocessed to contain a label:
- **QUESTION**: For SQuAD questions
- **NOT_QUESTION**: For non-question sentences from GLUE SST-2.
### Data Preprocessing
- A random removal probability (`P_remove = 0.3`) was applied to remove some of the questions containing a question mark (`?`), to increase the model's robustness.
- Both datasets were balanced with an equal number of samples (`N=100` for training and testing).
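A minimal sketch of that question-mark removal step (assuming "removal" means stripping the trailing `?` from a sampled subset of questions rather than dropping the examples; names are illustrative):

```python
import random

P_REMOVE = 0.3  # probability of stripping the trailing question mark

def preprocess_question(text: str) -> str:
    """With probability P_REMOVE, drop the '?' so the classifier cannot rely on it."""
    if text.rstrip().endswith("?") and random.random() < P_REMOVE:
        return text.rstrip().rstrip("?").rstrip()
    return text

print(preprocess_question("What is the capital of France?"))
```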
## Model Performance
- **Metrics Evaluated**:
- Accuracy
- F1 Score
- Precision
- Recall
- These metrics were computed on a balanced test dataset containing both question and non-question examples.
## How to Use
You can use this model to classify whether a given text is a question or not. Here’s how you can use it:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("your_model_name")
model = AutoModelForSequenceClassification.from_pretrained("your_model_name")

inputs = tokenizer("What is the capital of France?", return_tensors="pt")
outputs = model(**inputs)
predictions = torch.argmax(outputs.logits, axis=1)

label = "QUESTION" if predictions == 1 else "NOT_QUESTION"
print(f"Predicted Label: {label}")
```
## Limitations
- The model was trained on English data only, so it may not perform well on non-English languages.
- Since it is fine-tuned on specific datasets (SQuAD and GLUE SST-2), performance may vary with out-of-domain data.
- The model assumes well-formed input sentences, so performance may degrade with informal or very short text.
## Intended Use
This model is intended for text classification tasks where distinguishing between questions and non-questions is needed. Potential use cases include:
- Improving chatbot or virtual assistant interactions.
- Enhancing query detection for search engines.
## License
This model follows the same license as [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it). Please refer to the original license for any usage restrictions.
| null |
Non_BioNLP
|
|
{"base_model": "google/gemma-2b-it", "datasets": ["squad", "glue"], "language": "en", "license": "apache-2.0", "metrics": ["accuracy", "f1", "precision", "recall"], "model_name": "question-classification-lora-quant", "tags": ["text-classification", "question-classification", "LoRA", "quantization"], "widget": [{"text": "What is the capital of France?"}, {"text": "This is a beautiful day."}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,241 |
Helsinki-NLP/opus-mt-en-zh
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2023-08-16T11:31:42+00:00
| 494,436 | 360 |
---
language:
- en
- zh
license: apache-2.0
tags:
- translation
---
### eng-zho
* source group: English
* target group: Chinese
* OPUS readme: [eng-zho](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zho/README.md)
* model: transformer
* source language(s): eng
* target language(s): cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant gan lzh lzh_Hans nan wuu yue yue_Hans yue_Hant
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-07-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.zip)
* test set translations: [opus-2020-07-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.test.txt)
* test set scores: [opus-2020-07-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.eval.txt)
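A minimal usage sketch showing the required target-language token (assuming the Hugging Face `transformers` Marian classes; `cmn_Hans` is just one of the target IDs listed above, and the sample sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-zh"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prefix the source sentence with the >>id<< token for the desired target variant
src = [">>cmn_Hans<< Machine translation has improved a lot in recent years."]
batch = tokenizer(src, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```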
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.zho | 31.4 | 0.268 |
### System Info:
- hf_name: eng-zho
- source_languages: eng
- target_languages: zho
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zho/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'zh']
- src_constituents: {'eng'}
- tgt_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.test.txt
- src_alpha3: eng
- tgt_alpha3: zho
- short_pair: en-zh
- chrF2_score: 0.268
- bleu: 31.4
- brevity_penalty: 0.8959999999999999
- ref_len: 110468.0
- src_name: English
- tgt_name: Chinese
- train_date: 2020-07-17
- src_alpha2: en
- tgt_alpha2: zh
- prefer_old: False
- long_pair: eng-zho
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| null |
Non_BioNLP
|
|
{"language": ["en", "zh"], "license": "apache-2.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 42,242 |
MrLight/dse-phi35-vidore-ft
|
MrLight
| null |
[
"Tevatron",
"pytorch",
"phi3_v",
"vidore",
"custom_code",
"en",
"dataset:Tevatron/docmatix-ir",
"dataset:HuggingFaceM4/Docmatix",
"dataset:Tevatron/msmarco-passage-aug",
"dataset:vidore/colpali_train_set",
"dataset:Tevatron/wiki-ss-nq",
"arxiv:2406.11251",
"license:mit",
"region:us"
] | 2024-09-05T15:09:53Z |
2024-09-07T04:21:15+00:00
| 333 | 10 |
---
datasets:
- Tevatron/docmatix-ir
- HuggingFaceM4/Docmatix
- Tevatron/msmarco-passage-aug
- vidore/colpali_train_set
- Tevatron/wiki-ss-nq
language:
- en
library_name: Tevatron
license: mit
tags:
- vidore
---
# DSE-Phi35-Vidore-ft
DSE-Phi35-Vidore-ft is a bi-encoder model designed to encode document screenshots into dense vectors for document retrieval. The Document Screenshot Embedding ([DSE](https://arxiv.org/abs/2406.11251)) approach captures documents in their original visual format, preserving all information such as text, images, and layout, thus avoiding tedious parsing and potential information loss.
The model, `Tevatron/dse-phi35-vidore-ft`, is trained using 1/10 of the `Tevatron/docmatix-ir` dataset, a variant of `HuggingFaceM4/Docmatix` specifically adapted for training PDF retrievers with Vision Language Models in open-domain question answering scenarios. For more information on dataset filtering and hard negative mining, refer to the [docmatix-ir](https://huggingface.co/datasets/Tevatron/docmatix-ir/blob/main/README.md) dataset page.
The model was then fine-tuned on the [vidore](https://huggingface.co/datasets/vidore/colpali_train_set) training set; the checkpoint is warmed up by text retrieval and webpage retrieval.
For example, DSE-Phi3-Vidore-V2 achieves **82.9** nDCG@5 on the [ViDoRE](https://huggingface.co/spaces/vidore/vidore-leaderboard) leaderboard.
## How to train the model from scratch
Please see https://github.com/texttron/tevatron/tree/main/examples/dse
## How to Use the Model
### Load the Model and Processor
```python
import torch
from transformers import AutoProcessor, AutoModelForCausalLM
processor = AutoProcessor.from_pretrained('MrLight/dse-phi35-vidore-ft', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('MrLight/dse-phi35-vidore-ft', trust_remote_code=True, attn_implementation="flash_attention_2", torch_dtype=torch.bfloat16, use_cache=False).to('cuda:0')
def get_embedding(last_hidden_state: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    sequence_lengths = attention_mask.sum(dim=1) - 1
    bs = last_hidden_state.shape[0]
    reps = last_hidden_state[torch.arange(bs, device=last_hidden_state.device), sequence_lengths]
    reps = torch.nn.functional.normalize(reps, p=2, dim=-1)
    return reps
```
### Encode Text Query
```python
queries = ["query: Where can we see Llama?</s>", "query: What is LLaMA model?</s>"]
query_inputs = processor(queries, return_tensors="pt", padding="longest", max_length=128, truncation=True).to('cuda:0')
with torch.no_grad():
    output = model(**query_inputs, return_dict=True, output_hidden_states=True)
    query_embeddings = get_embedding(output.hidden_states[-1], query_inputs["attention_mask"])
```
### Encode Document Screenshot
```python
from PIL import Image
import requests
from io import BytesIO
# URLs of the images
url1 = "https://huggingface.co/Tevatron/dse-phi3-docmatix-v2/resolve/main/animal-llama.png"
url2 = "https://huggingface.co/Tevatron/dse-phi3-docmatix-v2/resolve/main/meta-llama.png"
# Download and open images
response1 = requests.get(url1)
response2 = requests.get(url2)
passage_image1 = Image.open(BytesIO(response1.content)).resize((1344, 1344))
passage_image2 = Image.open(BytesIO(response2.content)).resize((1344, 1344))
passage_images = [passage_image1, passage_image2]
passage_prompts = ["<|image_1|>\nWhat is shown in this image?</s>", "<|image_2|>\nWhat is shown in this image?</s>"]
# Process inputs and get embeddings
passage_inputs = processor(passage_prompts, images=passage_images, return_tensors="pt", padding="longest", max_length=4096, truncation=True).to('cuda:0')
passage_inputs['input_ids'] = passage_inputs['input_ids'].squeeze(0)
passage_inputs['attention_mask'] = passage_inputs['attention_mask'].squeeze(0)
passage_inputs['image_sizes'] = passage_inputs['image_sizes'].squeeze(0)
with torch.no_grad():
    output = model(**passage_inputs, return_dict=True, output_hidden_states=True)
    doc_embeddings = get_embedding(output.hidden_states[-1], passage_inputs["attention_mask"])
```
### Compute Similarity
```python
from torch.nn.functional import cosine_similarity
num_queries = query_embeddings.size(0)
num_passages = doc_embeddings.size(0)
for i in range(num_queries):
    query_embedding = query_embeddings[i].unsqueeze(0)
    similarities = cosine_similarity(query_embedding, doc_embeddings)
    print(f"Similarities for Query {i+1}: {similarities.cpu().float().numpy()}")
```
### Encode Document Text
This DSE checkpoint was warmed up with `Tevatron/msmarco-passage-aug`, so the model can also effectively encode documents given as plain text input.
```python
passage_prompts = [
"The llama (/ˈlɑːmə/; Spanish pronunciation: [ˈʎama] or [ˈʝama]) (Lama glama) is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era.</s>",
"Llama (acronym for Large Language Model Meta AI, and formerly stylized as LLaMA) is a family of autoregressive large language models (LLMs) released by Meta AI starting in February 2023.[2][3] The latest version is Llama 3.1, released in July 2024.[4]</s>"
]
passage_inputs = processor(passage_prompts, images=None, return_tensors="pt", padding="longest", max_length=4096, truncation=True).to('cuda:0')
with torch.no_grad():
    output = model(**passage_inputs, return_dict=True, output_hidden_states=True)
    doc_embeddings = get_embedding(output.hidden_states[-1], passage_inputs["attention_mask"])

for i in range(num_queries):
    query_embedding = query_embeddings[i].unsqueeze(0)
    similarities = cosine_similarity(query_embedding, doc_embeddings)
    print(f"Similarities for Query {i+1}: {similarities.cpu().float().numpy()}")
```
### Citation
If you find this checkpoint helpful, please consider citing Phi3, Docmatix, ViDoRe, and our DSE work.
| null |
Non_BioNLP
|
|
{"datasets": ["Tevatron/docmatix-ir", "HuggingFaceM4/Docmatix", "Tevatron/msmarco-passage-aug", "vidore/colpali_train_set", "Tevatron/wiki-ss-nq"], "language": ["en"], "library_name": "Tevatron", "license": "mit", "tags": ["vidore"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,243 |
huyhuy123/paraphrase-vietnamese-law-1
|
huyhuy123
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"dataset_size:10K<n<100K",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-09-22T05:34:40Z |
2024-09-22T05:46:15+00:00
| 12 | 0 |
---
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
language: []
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dataset_size:10K<n<100K
- loss:CosineSimilarityLoss
widget:
- source_sentence: Có những loại chứng thực nào?
sentences:
- Chứng cứ trong vụ án hình sự được hiểu như thế nào?
- Thời điểm hợp đồng cầm cố tài sản có hiệu lực là khi nào?
- Viên chức chuyên ngành công tác xã hội có nhiệm vụ gì?
- source_sentence: Premic vitamin là hỗn hợp gì?
sentences:
- 04 mức bồi dưỡng bằng hiện vật cho người lao động năm 2024?
- Thủ tục đăng ký khai tử cho người Việt Nam ở trong nước
- Trường hợp nào được mua pháo hoa của Bộ Quốc phòng về sử dụng?
- source_sentence: Khi nào thẻ CCCD sẽ được cấp lại?
sentences:
- Sai sót về thông tin trên thẻ CCCD được cấp đổi hay cấp lại?
- 1. Dùng bảo hiểm xe máy online có vi phạm pháp luật không?
- Bản chất của chủ nghĩa xã hội ở Việt Nam là gì?
- source_sentence: Mẫu đơn cam kết cho ở nhờ 2024?
sentences:
- Cách đăng ký tạm trú online tại nhà dễ thực hiện nhất 2023?
- Hồ sơ thiết kế xe ô tô cải tạo do cơ quan nào thẩm định?
- Thủ tục thu hồi giấy phép kinh doanh vận tải bằng xe ô tô như thế nào?
- source_sentence: Nhập hộ khẩu cho con ở đâu?
sentences:
- Xác định quê quán đối với trẻ em bị bỏ rơi như thế nào?
- Kéo đường dây điện qua đất của người khác có được không?
- Đặc quyền của xe biển số đỏ là gì?
model-index:
- name: SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: Unknown
type: unknown
metrics:
- type: pearson_cosine
value: 0.9803795255081482
name: Pearson Cosine
- type: spearman_cosine
value: 0.8645230973533456
name: Spearman Cosine
- type: pearson_manhattan
value: 0.9726972909054996
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8644864331598612
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.9728920367615165
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8645176892999815
name: Spearman Euclidean
- type: pearson_dot
value: 0.9583904271234501
name: Pearson Dot
- type: spearman_dot
value: 0.861105909034599
name: Spearman Dot
- type: pearson_max
value: 0.9803795255081482
name: Pearson Max
- type: spearman_max
value: 0.8645230973533456
name: Spearman Max
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) <!-- at revision 79f2382ceacceacdf38563d7c5d16b9ff8d725d6 -->
- **Maximum Sequence Length:** 300 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("huyhuy123/paraphrase-vietnamese-law-1")
# Run inference
sentences = [
'Nhập hộ khẩu cho con ở đâu?',
'Xác định quê quán đối với trẻ em bị bỏ rơi như thế nào?',
'Kéo đường dây điện qua đất của người khác có được không?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:-------------------|:-----------|
| pearson_cosine | 0.9804 |
| spearman_cosine | 0.8645 |
| pearson_manhattan | 0.9727 |
| spearman_manhattan | 0.8645 |
| pearson_euclidean | 0.9729 |
| spearman_euclidean | 0.8645 |
| pearson_dot | 0.9584 |
| spearman_dot | 0.8611 |
| pearson_max | 0.9804 |
| **spearman_max** | **0.8645** |
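For reference, a minimal sketch of running this evaluator yourself (the sentence pairs and gold scores below are placeholders, not the actual validation data):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("huyhuy123/paraphrase-vietnamese-law-1")

# Gold similarity scores are expected in [0, 1]; the pairs here are illustrative only
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["Nhập hộ khẩu cho con ở đâu?"],
    sentences2=["Thủ tục nhập hộ khẩu cho con như thế nào?"],
    scores=[0.9],
    name="dev",
)
print(evaluator(model))
```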
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 78,240 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 7 tokens</li><li>mean: 20.73 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 185.68 tokens</li><li>max: 300 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:----------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>Điều khiển xe ô tô tải chở hàng vượt trọng tải cho phép được ghi trong Giấy đăng kiểm từ 30% - 50% thì bị xử phạt thế nào?</code> | <code>Tại khoản 2 Điều 16 Nghị định 65/2023/NĐ-CP có quy định về chi phí phải nộp khi yêu cầu sửa đổi, bổ sung đơn đăng ký sở hữu công nghiệp như sau: Như vậy, người yêu cầu sửa đổi, bổ sung đơn đăng ký sở hữu công nghiệp phải nộp các chi phí bao gồm: - Phí thẩm định yêu cầu sửa đổi, bổ sung cho mỗi nội dung sửa đổi theo quy định và bản sao chứng từ nộp phí nếu nộp phí qua dịch vụ bưu chính hoặc nộp trực tiếp vào tài khoản của cơ quan quản lý nhà nước về quyền sở hữu công nghiệp; - Phí công bố thông tin sửa đổi, bổ sung đơn theo quy định nếu nội dung sửa đổi, bổ sung phải được công bố trong trường hợp yêu cầu sửa đổi, bổ sung thông tin liên quan đến: Đơn hợp lệ về mặt hình thức ghi trong quyết định chấp nhận đơn hợp lệ; tên, quốc tịch của tác giả sáng chế, kiểu dáng công nghiệp, thiết kế bố trí; Bản tóm tắt sáng chế kèm theo hình vẽ (nếu có); bộ ảnh chụp hoặc bản vẽ kiểu dáng công nghiệp; mẫu nhãn hiệu và danh mục hàng hóa, dịch vụ kèm theo; bản mô tả tính chất đặc thù của sản phẩm mang chỉ dẫn địa lý và tên sản phẩm mang chỉ dẫn địa lý - Trường hợp việc sửa đổi, bổ sung phải thực hiện để khắc phục những sai sót do lỗi của cơ quan quản lý nhà nước về quyền sở hữu công nghiệp, người nộp đơn không phải nộp phí công bố. Trân trọng!</code> | <code>0.0</code> |
| <code>Hợp đồng nghiên cứu khoa học và phát triển công nghệ chấm dứt khi nào?</code> | <code>Căn cứ Tiểu mục 3 Mục 2 Nghị quyết 27-NQ/TW năm 2018 quy định bảng lương mới của cán bộ, công chức, viên chức và lực lượng vũ trang sau cải cách tiền lương như sau: (1) Thiết kế cơ cấu tiền lương mới gồm: - Lương cơ bản (chiếm tỷ lệ trong khoảng 70% tổng quỹ lương) và các khoản phụ cấp (chiếm khoảng 30% tổng quỹ lương). - Bổ sung tiền thưởng (quỹ tiền thưởng tỷ lệ trong khoảng 10% tổng quỹ tiền lương của năm, không bao gồm phụ cấp). (2) Xây dựng, ban hành hệ thống bảng lương mới theo vị trí việc làm, chức danh và chức vụ lãnh đạo thay thế hệ thống bảng lương hiện hành; chuyển xếp lương cũ sang lương mới, bảo đảm không thấp hơn tiền lương hiện hưởng, gồm: - Xây dựng 1 bảng lương chức vụ áp dụng đối với cán bộ, công chức, viên chức giữ chức vụ lãnh đạo (bầu cử và bổ nhiệm) trong hệ thống chính trị từ Trung ương đến cấp xã theo nguyên tắc: + Mức lương chức vụ phải thể hiện thứ bậc trong hệ thống chính trị; giữ chức vụ lãnh đạo nào thì hưởng lương theo chức vụ đó, nếu một người giữ nhiều chức vụ thì hưởng một mức lương chức vụ cao nhất; giữ chức vụ lãnh đạo tương đương nhau thì hưởng mức lương chức vụ như nhau; mức lương chức vụ của người lãnh đạo cấp trên phải cao hơn mức lương chức vụ của người lãnh đạo cấp dưới; + Quy định một mức lương chức vụ cho mỗi loại chức vụ tương đương; không phân loại bộ, ngành, ban, uỷ ban và tương đương ở Trung ương khi xây dựng bảng lương chức vụ ở Trung ương; không phân biệt mức lương chức vụ khác nhau đối với cùng chức danh lãnh đạo theo phân loại đơn vị hành chính ở địa phương mà thực hiện bằng chế độ phụ cấp. + Việc phân loại chức vụ lãnh đạo tương đương trong hệ thống chính trị để thiết kế bảng lương chức vụ do Bộ Chính trị quyết định sau khi đã báo cáo Ban Chấp hành Trung ương. - Xây dựng 1 bảng lương chuyên môn, nghiệp vụ theo ngạch công chức và chức danh nghề nghiệp viên chức áp dụng chung đối với công chức, viên chức không giữ chức danh lãnh đạo; mỗi ngạch công chức, chức danh nghề nghiệp viên chức có nhiều bậc lương theo nguyên tắc: + Cùng mức độ phức tạp công việc thì mức lương như nhau; điều kiện lao động cao hơn bình thường và ưu đãi nghề thì thực hiện bằng chế độ phụ cấp theo nghề; sắp xếp lại nhóm ngạch và số bậc trong các ngạch công chức, chức danh nghề nghiệp viên chức, khuyến khích công chức, viên chức nâng cao trình độ chuyên môn, nghiệp vụ. + Việc bổ nhiệm vào ngạch công chức hoặc chức danh nghề nghiệp viên chức phải gắn với vị trí việc làm và cơ cấu ngạch công chức, chức danh nghề nghiệp viên chức do cơ quan, tổ chức, đơn vị quản lý công chức, viên chức thực hiện. - Xây dựng 3 bảng lương đối với lực lượng vũ trang, gồm: + 1 bảng lương sĩ quan quân đội, sĩ quan, hạ sĩ quan nghiệp vụ công an. + 1 bảng lương quân nhân chuyên nghiệp, chuyên môn kỹ thuật công an; + 1 bảng lương công nhân quốc phòng, công nhân công an.</code> | <code>0.0</code> |
| <code>Chưa đăng ký kết hôn có quyền bắt buộc người ba cấp dưỡng cho con được hay không?</code> | <code>Các nguyên tắc, trình tự xử lý kỷ luật lao động được quy định tại Điều 122 Bộ luật Lao động 2019, cụ thể: (1) Việc xử lý kỷ luật lao động được quy định như sau: - Người sử dụng lao động phải chứng minh được lỗi của người lao động; - Phải có sự tham gia của tổ chức đại diện người lao động tại cơ sở mà người lao động đang bị xử lý kỷ luật là thành viên; - Người lao động phải có mặt và có quyền tự bào chữa, nhờ luật sư hoặc tổ chức đại diện người lao động bào chữa; trường hợp là người chưa đủ 15 tuổi thì phải có sự tham gia của người đại diện theo pháp luật; - Việc xử lý kỷ luật lao động phải được ghi thành biên bản. (2) Không được áp dụng nhiều hình thức xử lý kỷ luật lao động đối với một hành vi vi phạm kỷ luật lao động. (3) Khi một người lao động đồng thời có nhiều hành vi vi phạm kỷ luật lao động thì chỉ áp dụng hình thức kỷ luật cao nhất tương ứng với hành vi vi phạm nặng nhất. (4) Không được xử lý kỷ luật lao động đối với người lao động đang trong thời gian sau đây: - Nghỉ ốm đau, điều dưỡng; nghỉ việc được sự đồng ý của người sử dụng lao động; - Đang bị tạm giữ, tạm giam; - Đang chờ kết quả của cơ quan có thẩm quyền điều tra xác minh và kết luận đối với hành vi vi phạm được quy định tại khoản 1 và khoản 2 Điều 125 Bộ luật Lao động 2019 - Người lao động nữ mang thai; người lao động nghỉ thai sản, nuôi con dưới 12 tháng tuổi. (5) Không xử lý kỷ luật lao động đối với người lao động vi phạm kỷ luật lao động trong khi mắc bệnh tâm thần hoặc một bệnh khác làm mất khả năng nhận thức hoặc khả năng điều khiển hành vi của mình. Trân trọng!</code> | <code>0.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
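Concretely, this loss embeds both sentences, takes their cosine similarity, and regresses it against the label with MSE. A hand-written equivalent of that objective, with toy tensors standing in for real embeddings, looks roughly like this:
```python
import torch
import torch.nn.functional as F

# Toy stand-ins for the two embedding batches produced by the model
embeddings_0 = torch.randn(4, 768)   # embeddings of sentence_0
embeddings_1 = torch.randn(4, 768)   # embeddings of sentence_1
labels = torch.tensor([0.0, 1.0, 0.0, 1.0])

# CosineSimilarityLoss: MSE between the label and cos_sim(embedding_0, embedding_1)
cosine_scores = F.cosine_similarity(embeddings_0, embeddings_1, dim=1)
loss = F.mse_loss(cosine_scores, labels)
print(loss)
```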
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
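Roughly, the non-default values above map onto the Sentence Transformers trainer as in the sketch below. The output path is illustrative and the single-row dataset only mirrors the column layout; it is not the actual training data.
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

train_dataset = Dataset.from_dict({
    "sentence_0": ["Nhập hộ khẩu cho con ở đâu?"],
    "sentence_1": ["Xác định quê quán đối với trẻ em bị bỏ rơi như thế nào?"],
    "label": [0.0],
})

args = SentenceTransformerTrainingArguments(
    output_dir="paraphrase-vietnamese-law",        # illustrative path
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=5,
    multi_dataset_batch_sampler="round_robin",     # only relevant with multiple training datasets
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=CosineSimilarityLoss(model),
)
trainer.train()
```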
### Training Logs
| Epoch | Step | Training Loss | spearman_max |
|:------:|:-----:|:-------------:|:------------:|
| 0.2045 | 500 | 0.0352 | - |
| 0.4090 | 1000 | 0.0208 | 0.8642 |
| 0.6135 | 1500 | 0.0181 | - |
| 0.8180 | 2000 | 0.0171 | 0.8645 |
| 1.0 | 2445 | - | 0.8645 |
| 1.0225 | 2500 | 0.0148 | - |
| 1.2270 | 3000 | 0.012 | 0.8648 |
| 1.4315 | 3500 | 0.0105 | - |
| 1.6360 | 4000 | 0.0093 | 0.8646 |
| 1.8405 | 4500 | 0.0092 | - |
| 2.0 | 4890 | - | 0.8646 |
| 2.0450 | 5000 | 0.0082 | 0.8647 |
| 2.2495 | 5500 | 0.0069 | - |
| 2.4540 | 6000 | 0.0062 | 0.8644 |
| 2.6585 | 6500 | 0.0058 | - |
| 2.8630 | 7000 | 0.0056 | 0.8646 |
| 3.0 | 7335 | - | 0.8644 |
| 3.0675 | 7500 | 0.0048 | - |
| 3.2720 | 8000 | 0.0046 | 0.8646 |
| 3.4765 | 8500 | 0.0039 | - |
| 3.6810 | 9000 | 0.0039 | 0.8645 |
| 3.8855 | 9500 | 0.0034 | - |
| 4.0 | 9780 | - | 0.8643 |
| 4.0900 | 10000 | 0.0033 | 0.8644 |
| 4.2945 | 10500 | 0.003 | - |
| 4.4990 | 11000 | 0.0028 | 0.8644 |
| 4.7035 | 11500 | 0.0029 | - |
| 4.9080 | 12000 | 0.0025 | 0.8645 |
| 5.0 | 12225 | - | 0.8645 |
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.0
- Transformers: 4.39.3
- PyTorch: 2.1.2
- Accelerate: 0.29.3
- Datasets: 2.18.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
|
{"base_model": "sentence-transformers/paraphrase-multilingual-mpnet-base-v2", "language": [], "library_name": "sentence-transformers", "metrics": ["pearson_cosine", "spearman_cosine", "pearson_manhattan", "spearman_manhattan", "pearson_euclidean", "spearman_euclidean", "pearson_dot", "spearman_dot", "pearson_max", "spearman_max"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "dataset_size:10K<n<100K", "loss:CosineSimilarityLoss"], "widget": [{"source_sentence": "Có những loại chứng thực nào?", "sentences": ["Chứng cứ trong vụ án hình sự được hiểu như thế nào?", "Thời điểm hợp đồng cầm cố tài sản có hiệu lực là khi nào?", "Viên chức chuyên ngành công tác xã hội có nhiệm vụ gì?"]}, {"source_sentence": "Premic vitamin là hỗn hợp gì?", "sentences": ["04 mức bồi dưỡng bằng hiện vật cho người lao động năm 2024?", "Thủ tục đăng ký khai tử cho người Việt Nam ở trong nước", "Trường hợp nào được mua pháo hoa của Bộ Quốc phòng về sử dụng?"]}, {"source_sentence": "Khi nào thẻ CCCD sẽ được cấp lại?", "sentences": ["Sai sót về thông tin trên thẻ CCCD được cấp đổi hay cấp lại?", "1. Dùng bảo hiểm xe máy online có vi phạm pháp luật không?", "Bản chất của chủ nghĩa xã hội ở Việt Nam là gì?"]}, {"source_sentence": "Mẫu đơn cam kết cho ở nhờ 2024?", "sentences": ["Cách đăng ký tạm trú online tại nhà dễ thực hiện nhất 2023?", "Hồ sơ thiết kế xe ô tô cải tạo do cơ quan nào thẩm định?", "Thủ tục thu hồi giấy phép kinh doanh vận tải bằng xe ô tô như thế nào?"]}, {"source_sentence": "Nhập hộ khẩu cho con ở đâu?", "sentences": ["Xác định quê quán đối với trẻ em bị bỏ rơi như thế nào?", "Kéo đường dây điện qua đất của người khác có được không?", "Đặc quyền của xe biển số đỏ là gì?"]}], "model-index": [{"name": "SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "Unknown", "type": "unknown"}, "metrics": [{"type": "pearson_cosine", "value": 0.9803795255081482, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.8645230973533456, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.9726972909054996, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.8644864331598612, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.9728920367615165, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.8645176892999815, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.9583904271234501, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.861105909034599, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.9803795255081482, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.8645230973533456, "name": "Spearman Max"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] | 42,244 |
Shunian/yelp_review_classification
|
Shunian
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-11-05T06:38:54Z |
2022-11-05T07:21:17+00:00
| 18 | 0 |
---
datasets:
- yelp_review_full
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: yelp_review_classification
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: train
args: yelp_review_full
metrics:
- type: accuracy
value: 0.6852
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yelp_review_classification
This model was trained from scratch on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8517
- Accuracy: 0.6852
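A quick inference sketch with the 🤗 `pipeline` API (the mapping from the returned label ids to 1-5 star ratings depends on this model's config, which the card does not document):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Shunian/yelp_review_classification")

# yelp_review_full labels correspond to 1-5 star ratings; check the model config
# for how LABEL_0 ... LABEL_4 map onto them.
print(classifier("The food was great, but the service was painfully slow."))
```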
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
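For reference, a comparable setup with the 🤗 `Trainer` might look like the sketch below. The base checkpoint is not stated in this card, so `bert-base-uncased` is only a stand-in, and paths are illustrative.
```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# yelp_review_full has "train" and "test" splits with "text" and "label" columns
dataset = load_dataset("yelp_review_full")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # stand-in checkpoint

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=5)

args = TrainingArguments(
    output_dir="yelp_review_classification",  # illustrative path
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,
)
trainer.train()
```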
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:------:|:--------:|:---------------:|
| 0.7149 | 1.0 | 40625 | 0.6889 | 0.7167 |
| 0.6501 | 2.0 | 81250 | 0.6967 | 0.6979 |
| 0.5547 | 3.0 | 121875 | 0.6915 | 0.7377 |
| 0.5375 | 4.0 | 162500 | 0.6895 | 0.7611 |
| 0.4386 | 5.0 | 203125 | 0.6852 | 0.8517 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu102
- Datasets 2.5.2
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
|
{"datasets": ["yelp_review_full"], "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "yelp_review_classification", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "yelp_review_full", "type": "yelp_review_full", "config": "yelp_review_full", "split": "train", "args": "yelp_review_full"}, "metrics": [{"type": "accuracy", "value": 0.6852, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,245 |
yosefw/bert-amharic-embed-small-v5
|
yosefw
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:62833",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:rasyosef/bert-small-amharic",
"base_model:finetune:rasyosef/bert-small-amharic",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-03-11T03:53:12Z |
2025-03-11T03:53:16+00:00
| 4 | 0 |
---
base_model: rasyosef/bert-small-amharic
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:62833
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: በናይጀሪያ 11ሚሊየን ህዝብ የከፋ የምግብ እጥረት እንደሚያጋጥማቸው ድርጅቱ አስጠነቀቀ
sentences:
- በናይጀሪያ 11 ሚሊየን ህዝብ እጅግ የከፋ የምግብ እጥረት እንደሚያጋጥማቸዉ የዓለም የምግብ እና የእርሻ ድርጅት አስጠነቀቀ
፡፡ድርጅቱ የምርት ወቅት በሆነዉ ከሰኔ እስከ ነሐሴ ባሉት ጊዜያት በሰሜናዊ ናይጀሪያ የሚገኙ አካባቢዎች ለዚህ ችግር እንደሚጋለጡ
ይጠበቃል ነው ያለው ።በዚህ ክፉኛ ሊጠቁ እንደሚችሉ ከሚገመቱት ቦታዎችም ቦኮ ሃራም የተመሠረተባት ቦርኖ 65 በመቶ የሚሆነዉ
ርሃብ የሚያሰጋዉ ዜጋ የሚገኝባት ግዛት ናት ።ቦኮ ሃራም በፈጠረዉ ያለመረጋጋት ምክንያት 120 ሺ የሀገሪቱ ዜጎች አደገኛ ለሆነ
የረሃብ አደጋ እንዲጋለጡ ማድረጉን ጨምሮ አመልክቷል ።የተባበሩት መንግስታት ድርጅቶች በበኩሉ እንደገለጸው ፤በዚህ ስፍራ ልጆች
እየሞቱ ስለሆነ አፋጣኝ እርዳታ ካላገኙም በሚሊየን የሚገመቱት ይኸዉ ዕጣ ያሰጋቸዋል።ሙስና እና በመንግሥት እና በእርዳታ ድርጅቶች
መካከል ያለዉ ዉዝግብም ችግሩን እያወሳሰበዉ መሆኑም ተገልጿል።ባለስልጣናት የአካባቢዉ መንግሥታት የእርዳታ እህሉን ይሰርቃሉ
የሚለዉን ክስ እያጣሩ መሆኑን አሶሼየትድ ፕረስ ዘግቧል።የናይጀሪያ መንግሥት ለእርሻ የሚያደርገዉን ድጋፍ ከፍ ማደርጉን ቢገልጽም፤
ሀገሪቷ የምግብ እጥረት ያለባት መሆኗን ነው የተመለከተው ።
- የተባበሩት መንግሥታት ድርጅት ዋና ጸሐፊ አንቶንዮ ጉተሬዥ ትላንት በኒው ዮርክ ከኢትዮጵያ ፌዴራላዊ ዲሞክራሲያዊ ሪፖብሊክ ውጭ
ጉዳይ ሚኒስትር ዶ/ር ወርቅነህ ገበየሁ ጋር ተገናኝተው መነጋገራቸውን የደርጅቱ ቃል አቀባይ አስታወቁ።ዋና ጸሐፊ ጉተሬዥ እና
ውጭ ጉዳይ ሚኒስትር ወርቅነህ ገበየሁ በዚሁ በውይይታቸው በአሁኑ ወቅት በአፍሪካ ቀንድ እየተካሄዱ ስላሉ አዎንታዊ ለውጦችና
በአካባቢው የቀሩትን ፈታኝ ችግሮች ማስወገድ በሚቻሉባቸው መንገዶች ላይ ሃሳብ ተለዋውጠዋል። በቅርቡ በኢትዮጵያና ኤርትራ መካከል
የሰላም ሥምምነት መፈረሙን በደስታ እንደተቀበሉት ገልፀው ለአካባቢው ሰላምና መረጋጋት ጠቃሚ ነው ብለዋል።የተባበሩት መንግሥታት
ድርጅት ዋና ጸሐፊ አንቶኒዮ ጉተሬዥ የዓለሙ ድርጅት ለእነዚህ በጎ ጥረቶችና ኢትዮጵያ በጂቡቲና ኤርትራ መካከል የሰላም ንግግር
እንዲጀመር በማመቻቸቷ ሙሉ ድጋፍ እንደሚስጥ በድጋሚ አረጋግጠዋል።
- 'የኢትዮጵያ ከፍተኛ ሊግ ምድብ ለ የሁለተኛ የውድድር ዘመን አጋማሽ ጨዋታዎች ቅዳሜ ተደርገው ሀላባ ከተማ መሪነቱን ሲያጠናክር
ደቡብ ፖሊስ እና ጅማ አባ ቡና ደረጃቸውን ያሻሻሉበትን ድል አስመዝግበዋል።ጅማ አባ ቡና 3-0 ቤንች ማጂ ቡና(በቴዎድሮስ ታደሰ)ጅማ
ስታድየም ላይ ቤንች ማጂ ቡናን ያስተናገደው ጅማ አባቡናን 3-0 በማሸነፍ ደረጃውን አሻሽሏል። ጨዋታው ሳቢ ያልነበረ ሲሆን
ቤንች ማጂዎች ሙሉ ለሙሉ መከላከል እና ጉልበት ላይ ያመዘነ ጨዋታን ተከትለው ለመጫወት ተገደዋል። አባቡናዎች ምንም እንኳን
የመጀመርያውን አጋማሽ በተመስገን ደረሰ 34ኛው ደቂቃ ባስቆጠራት ጎል በመምራት ቢያጠናቅቁም በእንቅስቃሴ ረገድ ባልተሳኩ ቅብብሎች
እና ያልተደራጀ የጨዋታ እንቅስቃሴ እንዲሁም የቤንች ማጂን የተከላካይ መስመር ማለፍ ሲቸገሩ ተመልክተናል።
ከእረፍት መልስ በይበልጥ በሽኩቻዎች ታጅቦ ቤንች ማጂ ተጫዋቾች እያንዳንዱን የዳኛ ውሳኔ ሲቃወሙ እና አላስፈላጊ ንትርክ ውስጥ
ሲገቡ ተስተውለዋል። በ50ኛው ደቂቃ ብዙዓየሁ እንደሻው አባቡናን መሪነት ወደ 2-0 መሪነት ከፍ ያደረገች ግብ ካስቆጠረ በኋላ
ከመጀመርያው አጋማሽ አንፃር አባቡናዎች ወደግብ ቶሎ ቶሎ በመድረስና የኳስ ቁጥጥር ከመጀመርያው አጋማሽ በተሻለ ሁኔታ መንቀሳቀስ
ችለዋል። በ68ኛው ደቂቃ ብዙአየሁ እንደሻው ፍፁም ቅጣት ምት ክልል ውስጥ መጠለፉን ተከትሎ የመሀል ዳኛው የሰጡትን ፍፁም ቅጣት
ምት በመቃወም ከዳኛው ጋር ግብግብ የገጠሙት የቤንች ማጂ ተጫዋቾች ጌታሁን ገላዬ እና አበራ አየለ ከሜዳ በቀይ ካርድ እንዲወጡ
ተደርጓል። በሁኔታው ጨዋታው ለ10 ደቂቃዎች ተቋርጦ ከቀጠለ በኋላ የተሰጠውን ፍፁም ቅጣት ምት ብዙአየሁ አስቆጥሮ አባቡናን
3-0 በሆነ ውጤት ማሸነፍ ያስቻለውን ውጤት አስመዝግቧል።ከጨዋታው መጠናቀቅ በኋላ ቤንች ማጂ የቡድን አባላት ሜዳ በመግባት
የእለቱ አልቢትሮች ላይ ጥቃት ለመፈፀም ቢሞክሩም በእለቱ በነበሩበት የፀጥታ አካላት ርብርብ አርቢትሮች ላይ ጉዳት ሳይደርስ
የቀረ ሲሆን የግብ ጠባቂው አሰልጣኝ ታፈሰ አጃ ቀይ ካርድ ተመልክቷል። ከሁሉም በላይ አስገራሚው ግብ ጠባቂው አብዱልሃፊዝ መኪ
ከዳኛው አልፍ ተርፎ ፀጥታ በማስከበርና ለዳኞች ከለላ ለሰጠው የኮማንድ ፖስት አባል የሆነው የፌዴራል ፖሊስ ላይ ጥቃት በመሰንዘሩ
ነገሮች አቅጣጫዎችን በመቀየር ጨዋታው ከተጠናቀቀ በኃላ ለረጅም ደቂቃዎች በፀጥታ አካላት እና በእለቱ የጨዋታ ኮሚሽነር ዩሀንስ
ስለሺ አሸማጋይነት ከቆዩ በኃላ በኮሚሽነሩ ጥረትና የፀጥታ አስከባሪ አካላት ሁኔታውን በማብረዳቸው በህግ ቁጥጥር ስር ሳይውል
ቀርቷል፡፡
ሌሎች ጨዋታዎች(በአምሀ ተስፋዬ)ሀላባ ላይ በ10:00 ድሬዳዋ ፖሊስን ያስተናገደው ሀላባ ከተማ በ23ኛው ደቂቃ ስንታየሁ መንግስቱ
በፍፁም ቅጣት ምት ባስቆጠረው ጎል 1-0 አሸንፏል። በዚህም ውጤት መሰረት ሀላባ ከተማ በ31 ነጥቦች የምድብ ለ መሪነትን ማጠናከር
ችሏል።
ደቡብ ፖሊስ በሜዳው በግብ መንበሽበሹን ቀጥሎ ከመቂ ከተማ ጋር ባደረገው ጨዋታ 4-0 በሆነ ውጤት ማሸነፍ ችሏል። በ10ኛው
ደቂቃ ብሩክ ኤልያስ ቀዳሚውን ጎል ሲያስቆጥር በ18ኛው ደቂቃ አበባየው ዮሐንስ ሁለተኛውን ግብ አስቆጥሮ ወደ ዕረፍት አምርተዋል።
በ53ኛው ደቂቃ ኤሪክ ሙራንዳ በፍፁም ቅጣት ምት 3ኛውን ሲያክል በ82ኛው ደቂቃ አራተኛ ግብ አስቆጥረዋል። በዚህ ውጤት መሰረት
ደቡብ ፖሊስ ደረጃውን ወደ ሁለተኛ ከፍ ማድረግ ችሏል።
ቦንጋ ላይ ካፋ ቡና ሻሸመኔ ከተማን አስተናግዶ በሀቁ ምንይሁን ገዛኸኝ የ5ኛው ደቂቃ ብቸኛ ግብ 1-0 ማሸነፍ ችሎል። ነገሌ
ላይ ነገሌ ከተማ ቡታጅራ ከተማን 1-0 ሲያሸንፍ ድሬዳዋ ላይ ረፋድ 04:00 ናሽናል ሴሜንት በሳሙኤል ዘሪሁን ጎሎች ወልቂጤ
ከተማን 2-1 መርታት ችሏል። በደረጃ ሰንጠረዡ ሁለተኛ ላይ የነበረው ዲላ ከተማ ደግሞ ወደ ዱራሜ ተጎዞ ከሀምበሪቾ ጋር ያለጎል
አቻ በመለያየት ከደረጃው ለመንሸራተት ተገዷል።'
- source_sentence: በኮሮና ቫይረስ ሳቢያ በአፍሪካ እስከ 190 ሺህ ሰዎች በመጀመሪያው ዓመት ህይዎታቸው ሊያልፍ ይችላል
– የዓለም ጤና ድርጅት
sentences:
- አዲስ አበባ ፣ ሚያዚያ 30 ፣ 2012 (ኤፍ ቢ ሲ) በአፍሪካ ከ83 ሺህ እስከ 190 ሺህ ሰዎች በኮሮና ቫይረስ ሳቢያ በመጀመሪያው
ዓመት ብቻ ህይዎታቸው ሊያልፍ እንደሚችል የዓለም ጤና ድርጅት አስጠነቀቀ።ድርጅቱ ትናንት ባወጣው መግለጫ በአፍሪካ ሃገራት ቫይረሱን
ለመከላከል የወጡ መመሪያና ህጎች ተግባራዊ እየተደረጉ አለመሆኑን እና እየተወሰዱ ያሉ እርምጃዎችም በቂ አለመሆናቸውን አስታውቋል።ቫይረሱ
ወደ አህጉሪቱ ዘግይቶ ቢገባም ለበርካታ ጊዜ የሚቆይ ወረርሽኝ ሊሆን ይችላልም ነው ያለው።በተጨማሪም ቫይረሱን በፍጥነት በቁጥጥር
ስር ማዋል ካልተቻለ ከ29 እስከ 44 ሚሊየን ሰዎችን ሊይዝ እንደሚችልም ነው ያስጠነቀቀው።የአሁኑ የድርጅቱ መረጃ በ47 የአፍሪካ
ሀገራት የሰራውን ጥናት መሰረት አድርጎ የወጣ ነው ተብሏል።ከሃገራቱ መካከል ደቡብ አፍሪካ፣ አልጄሪያ እና ካሜሩን በቫይረሱ
ክፉኛ ሊጠቁ የሚችሉ ሃገራት ይሆናሉ በሚልም ስጋቱን አስቀምጧል።በጥናቱ ግብጽ፣ ሊቢያ፣ ቱኒዚያ፣ ሞሮኮ፣ ኤርትራ፣ ሱዳን፣ ሶማሊያ
እና ጂቡቲ አልተካተቱም።በአፍሪካ በኮሮና ቫይረስ ሳቢያ በ47 ሃገራት ከ35 ሺህ በላይ ሰዎች ሲያዙ፥ ከ1 ሺህ 200 በላይ
ደግሞ ሕይወታቸው ማለፉን ድርጅቱ ገልጿል።ምንጭ፦ ቢቢሲየዜና ሰዓት ሳይጠብቁ የፋና ብሮድካስቲንግ ኮርፖሬትን ትኩስ እና ሰበር
ዜናዎችን በፍጥነት በአጭር የፅሁፍ መልዕክት መልክ በስልክዎ ላይ እንዲደርስዎ ወደ 8111 OK ብለው ይላኩ።
- በወልቂጤ ከተማ ከሳምንታት በፊት ውሉን ለማራዘም ተስማምቶ የነበረው የመስመር አጥቂው ጫላ ተሺታ አሁን ደግሞ ለቀድሞ ክለቡ
ሲዳማ ቡና ለመፈረም ተስማማ፡፡የቀድሞው የሻሸመኔ እና ሰበታ ከተማ ተጫዋች ለኢትዮጵያ ከ17 ዓመት በታች ብሔራዊ ቡድን ተሰልፎ
ከተጫወተ በኃላ ነበር 2010 ላይ ወደ ሲዳማ ቡና ተቀላቅሎ መጫወት የጀመረው። ተጫዋቹ በውሰት ለአዳማ ከተማም በመሄድ የተጫወተ
ሲሆን በ2011 በሲዳማ ቤት ባሳየው መልካም እንቅስቃሴ መነሻነት ዘንድሮ ወደ ወልቂጤ ሄዶ ከክለቡ ጋር መልካም የውድድር ዓመትን
ሊጉ እስከተቋረጠበት ጊዜ ድረስ ያሳየ ሲሆን ውሉ በመጠናቀቁ ከሳምንታት በፊት በወልቂጤ ለመቀጠል ቢስማማም በድጋሚ ለቀድሞው
ክለቡ ሲዳማ የሁለት ዓመት ውል ለመፈረም ተስማምቷል፡፡
- ማክሰኞ መጋቢት 15 ቀን 2012 ዓ.ም. ማለዳ ከማላዊ ወደ ሞዛምቢክ ኢትዮጵያዊያንን ጭኖ ሲጓዝ በነበረ የጭነት ተሽከርካሪ
ኮንቴይነር ውስጥ፣ 64 ኢትዮጵያዊያን ሞተው መገኘታቸውን የሞዛምቢክ ባለሥልጣናት አስታወቁ፡፡ ሰነድ አልባ ስደተኞቹ ወደ ደቡብ
አፍሪካ ለመሸጋገር ጉዞ የጀመሩ እንደሆኑ ታውቋል፡፡በስደት ወደ ደቡብ አፍሪካ ለመሄድ የጭነት ተሽከርካሪውን ተሳፍረው የነበሩት
78 ኢትዮጵያውያን እንደነበሩ፣ 14 ኢትዮጵያውያን ግን በሕይወት መገኘታቻው ተሰምቷል፡፡ ለ64 ኢትዮጵያውያን ሕይወት ማለፍ
ምክንያቱ በአየር ማጣት መታፈን ሊሆን እንደሚችል ተገልጿል፡፡ በሞዛምቢክ ሰሜናዊ ምዕራብ ቴቴ በተባለ ሥፍራ በሚገኝ ሙሳካና
በተባለ የክብደት መመዘኛ ጣቢያ ተሽከርካሪው በኢሚግሬሽን ሠራተኞች እንዲቆም ተደርጎ ፍተሻ ሲካሄድ፣ 64 ሟቾች በሕይወት ከተረፉት
ጋር መገኘታቸውን አዣንስ ፍራንስ ፕሬስ የሆስፒታል ምንጭ ጠቅሶ ዘግቧል፡፡የሞዛምቢክ ሰሜን ምዕራብ ግዛት ባለሥልጣናት ከጭነት
ተሽከርካሪው ላይ አስከሬናቸው የተገኙ ኢትዮጵያውያን፣ ሕይወታቸው በምን ምክንያት እንዳለፈ ምርመራ እየተደረገ መሆኑን መግለጻቸው
ታውቋል። በሌላ በኩል ኢትዮጵያውያኑ ሕይወታቸው ያለፈው በኮንቴይነር ውስጥ ታፍነው ሊሆን እንደሚችል በሞዛምቢክ ባለሥልጣናት
የተገለጸ ሲሆን፣ የሞዛምቢክ ብሔራዊ ኢሚግሬሽን አገልግሎት ቃል አቀባይ አሜሊያ ደሪዬሮ ሟቾቹን አሳፍሮ የነበረው የጭነት ተሽከርካሪ
ሾፌር እንዲቆም ሲጠየቅ ፈቃደኛ እንዳልነበረ መናገራቸውን ቢቢሲ ዘግቧል።እንደ ቃል አቀባይዋ ማብራሪያ የኢሚግሬሽን ሠራተኞች
በተሽከርካሪው ውስጥ ድምፅ በመስማታቸው፣ ስደተኞች በውስጡ ሊኖሩ ይችላሉ የሚል ጥርጣሬ አድሮባቸው እንዲቆም ሲያደርጉ ሟቾችን
አግኝተዋል።ወደ ደቡብ አፍሪካ በስደት የሚጓዙ ሰነድ አልባ ስደተኛ ኢትዮጵያዊያን ኬንያን፣ ታንዛኒያን፣ ማላዊና ሞዛምቢክን ለመሸጋገሪያነት
የሚጠቀሙባቸው እንደሆነ፣ የዛምቢያ ሰሜን ምዕራብ ግዛትም ዋናው የሰነድ አልባ ስደተኞች መተላለፊያ መሆኑን የተለያዩ ሚዲያዎች
ዘገባዎች ያመለክታሉ፡፡ በሕይወት የተረፉት 14 ኢትዮጵያዊያን የኮሮና ቫይረስ ምርመራ እንደ ተደረገላቸው ቃል አቀባይዋ መናገራቸውን
ቢቢሲ በዘገባው አካቷል።
- source_sentence: የአውሮፓና አፍሪካ መሪዎች የሊቢያን የባሪያ ንግድ በፍጥነት ለማስቆም ተስማሙ
sentences:
- 'በትናንትናው ዕለት የትግራይ ቴሌቪዝንና ድምፂ ወያነ ቴሌቪዥን ጣብያዎች የሳተላይት ስርጭት መቋረጡ ታውቋል።የሁለቱ መገናኛ
ብዙሃን ጣብያዎች ሥራ አስኪያጆች ጉዳዩን ለቢቢሲ ያረጋገጡ ሲሆን የትግራይ ቴሌቪዥን ጣቢያ ሥራ አስኪያጅ የሆኑት አቶ ተሻለ
በቀለ የቴሌቪዥኑ ጣብያ የሳተላይት ስርጭት እንዲቋረጥ የተደረገው በኢትዮጵያ መንግሥት ነው ሲሉ ለቢቢሲ ተናግረዋል።"መንግሥት
የሳተላይት ስርጭቱ እንዲቋረጥ አድርጓል። ስለተፈጠረው ነገር ለማወቅ ሳተላይቱን ወደ አከራየን ድርጅት ስንደውል የኢትዮጵያ መንግሥትና
የፈረንሳይ መንግሥት ተነጋግረው በሰጡት ትዕዛዝ መሰረት አቋርጠነዋል የሚል ምላሽ ነው የሰጡን፤ ምክንያቱን ስንጠይቅም የፈረንሳይ
መንግሥት ያለውን መፈፀም አለብን ነው ያሉት።" በጉዳዩ ላይ የኢትዮጵያ ብሮድካስት ባለስልጣን ምክትል ሥራ አስኪያጅ አቶ ወንድወሰን
አንዷለም ለቢቢሲ በሰጡት ቃል "እስካሁን ባለኝ መረጃ የኢትዮጵያ ብሮድካስት ባለስልጣን ሚዲያዎችን የመዝጋት እርምጃ አልወሰደም
"ብለዋል።የትግራይ ቴሌቪዥን ሥራ አስኪያጅ ስርጭታቸው እስከ መቼ ድረስ ተቋርጦ እንደሚቀጥል ያውቁ እንደሆን ተጠይቀው ይህንን
ጥያቄ ስርጭቱን ላቋረጠው ድርጅት ማቅረባቸውን ይገልፃሉ።እነርሱም እስከመቼ ድረስ እንደተቋረጠ እንደማያውቁ እና "ከኢትዮጵያ
መንግሥት ጋር ተነጋገሩ" ማለታቸውን ገልፀዋል።ሥራ አስኪያጁ አቶ ተሻለ አክለውም ወደ ኢትዮጵያ ብሮድካስት ባለስልጣን መደወላቸውንና
ስልካቸውን የሚመልስላቸው አለማግኘታቸውን ተናግረዋል።አቶ ተሻለ ለቢቢሲ ጨምረው እንደተናገሩት መቋረጡን የሚገልጽ ደብዳቤ ከድርጅቱ
አለመጻፉን ለቢቢሲ አረጋግጠዋል። በትግራይ ክልል የሚገኘው የቢቢሲ ሪፖርተር የሁለቱ ቴሌቪዥን ጣቢያዎች ስርጭት እየተላለፈ
አለመሆኑን አረጋግጧል።አቶ አበበ አስገዶም፣ የድምፂ ወያነ ቴሌቪዥን ጣብያ ሥራ አስኪያጅ ለሪፖርተራችን እንዳረጋገጡት፣ በበኩላቸው
ስርጭታቸው እንደተቋረጠ ፈረንሳይ አገር ወደ ሚገኘው የሳተላይት አገልግሎት አቅራቢ ድርጅት መደወላቸውንና የቴክኒክ ክፍል ኃላፊው
የኢትዮጵያ መንግሥት እንዲቋረጥ ማድረጉን እንደነገራቸው ገልፀዋል።ባለፈው ሳምንት ድምፂ ወያነ በአዲስ አበባ የሚገኘው ቢሮው
አቃቤ ሕግ በሚያደርግበት ምርመራ የተነሳ ፍተሻ እንደተደረገለት መዘገቡ ይታወሳል።የብሮድካስት ባለስልጣን ምክትል ሥራ አስኪያጅ
አቶ ወንድወሰን እነዚህ ሁለት ድርጅቶች የእርምት እርምጃ እንዲወስዱ ጥብቅ ማስጠንቀቂያ መጻፋቸውን አስታውሰዋል።ቢሆንም ግን
የጣቢያዎቹ ስርጭት እንዲቋረጥ ባለስልጣኑ የወሰደው ምንም አይነት እርምጃ እንደሌለ ለቢቢሲ ገልጸዋል። '
- የአውሮፓ እና አፍሪካ መሪወች የሊቢያ የባሪያ ንግድን አስመልክቶ በአይቪሪኮስት ባካሄዱት ስብሰባ ስደተኞችን ከሊቢያ በፍጥነት
ለማስቆም በሚያስችላቸው መንገድ ላይ ከስምምነት ላይ ደረሱ ፡፡የአውሮፓ ህብረት ፣የአፍሪካ ህብረት እና የተባበሩት መንግስታት
ድርጅት የሚታወቁ ህገወጥ የሰዎች አዘዋዋሪዎች ንብረት እንዳይንቀሳቀስ እንዲሁም ማንኛውም የፋይናንስ ምንጫቸው ሙሉ በሙሉ እንዲቋረጥ
ውሳኔ አሳልፈዋል፡፡በዚህ ዘመን የባሪያ ንግድ በሊቢያ ይካሄዳል ሲባል የሰማ የዓለም ህዝብ ቁጣውን ገልጿል፡፡ ለጉዳዩ ፈጣን
ምላሽ በመሥጠት ችግሩን ለመቅረፍ የአውሮፓ እና አፍሪካ ሀገራት መሪዎች በ ኮቲዲቫር ባደረጉት ስብሰባ ስደተኞቹ ከሊቢያ ባፋጣኝ
እንዲወጡ ይደረጋል ብለዋል፡፡በዚህም ስምምነት የአውሮፓ ህብረት ፣የአፍሪካ ህብረት እና የተባበሩት መንግስታት ድርጅት በጋራ
የአስቸኳይ እቅድ በማውጣት የህገወጥ አዘዋዋሪዎችን መረብ በመበጣጠስ አደገኛ የሰብአዊ ቀውስ ላይ ያሉ በሊቢያ የሚገኙ ስደተኞችን
የመታደግ ተልዕኮ አንግበዋል ተብሏል፡፡የአውሮፓዊያን ህብረት ምክር ቤት ፕሬዝዳንት ዶናልድ ቱስክ የባሪያ ንግዱ ሪፖርት በስብሰባዎች
ወቅት ያሰማን መረጃዎች የሚያስደነግጡ ነበሩ በማለት ለችግሩ እልባት ለማምጣት የአፍሪካ እና አውሮፓ መንግስታት ንቁ በሆነ መልኩ
በጋራ መስራት ይጠበቅባቸዋል ሲሉ መልክታቸውን አስተላልፈዋል፡፡የፈረንሳዩ ፕሬዝዳንት ኢማኑኤል ማክሮን ስደተኞችን ለመታደግ በሚቋቋመው
ግብረ ኃይል የአፍሪካ እና አውሮፓ ፖሊሶች ተሳታፊ ይሆናሉ ብለዋል፡፡ፕሬዝዳንት ኢማኑኤል ማክሮን ስደተኞችን በፍጥነት ለማውጣት
በሚደረገው ተልዕኮ በአጭር ጊዜ ውስጥ ህገወጥ አዘዋዋሪዎች በቁጥጥር ሥር ለማዋል፣ የባሪያ ሽያጭ የሚካሄድበት ድብቅ ትስስራቸውን
ማቋረጥ እንዲሁም በተለዩ ባንኮች የሚያደርጉትን የባንክ ገንዘብ ዝውውር ማስቀረት ቅድሚያ ከሚሰሩ ጉዳዮች መካከል መሆናቸውን
አብራርተዋል፡፡በአምስተኛው የአውሮፓ ህበረት እና አፍሪካ ህበረት የአቢጃን ስብሰባ ላይ የቀረበው እቅድ በዋናነት ቻድ ኒጀር
ሞሮኮ ኮንጎ እና ሊቢያ ያረቀቁት ሲሆን የፈረንሳይም እጅ አለበት፡፡አውሮፓ ለመግባት በሚያደርጉት ጥረት ሊቢያ ላይ ለባሪያ
ንግዱ እየተጋለጡ ያሉትም አብዛኛው ከምዕራብ አፍሪካ ሀገራት የሚነሱ ስደተኞች ቢሆኑም መነሻቸውን ከሌሎች የአፍሪካ ቀጠናዎች
ያደረጉም ቁጥራቸው በቀላል የሚገመት እንዳልሆነ ዘገባዎች ያመላክታሉ፡፡እስካሁን ስደተኞችን ለመመለስ በተሠራው የሀገራቱ ሥራ
ናይጄሪያ ከ240 በላይ ዜጎቿን ከሊቢያ ስታስወጣ ጋና 100 አይቬሪኮስት ደግሞ ከ155 በላይ ዜጎቿን ከሊቢያው ከባድ ሰብአዊ
ቀውስ ታድገዋል ያለው ሮይተርስ ነው፡፡
- "ሃና ጋዜጠኛ ናት። የቴሌቪዥን ሪፖርተር ሆና ሠርታለች። ነገር ግን ከአንድ ዓመት ከግማሽ በፊት የራሷን መንገድ መከተል መረጠች።\
\ ከቴሌቪዥን ሪፖርተርነት ራሷን ካገለለች ወዲህ በኅብረተሰብ አገልግሎት ሥራ ላይ ተሰማርታ ትገኛለች። \n\nከቅርብ ጊዜ ወዲህ\
\ ደግሞ ጥቁር አሜሪካውያን ላይ የሚደርሰውን ጭቆና በመቃወም የመብት ተሟጋችነት መድረክን ተቀላቅላለች። ከዚያም አልፎ ለእናትና\
\ ለአባቷ አገራት ኢትዮጵያና ኤርትራ የተለያዩ የእርዳታ ሥራዎችን እንደምትሰራ ትናገራለች። በአሁኑ ወቅት በዩትዩብና ፌስቡክ\
\ ገፆቿ ሥራዎቿን ታስተዋውቃለች።\n\nሃና ከጥቂት ሳምንታት በፊት በኮሮናቫይረስ የቅርብ ዘመዷን እንዳጣች በትዊተር ገጿ ላይ\
\ አስፍራ ነበር። ከዚህም አልፎ እሷም ተመርምራ ኮቪድ-19 እንዳለባት ማወቋን በይፋ በመናገር ሌሎች በእሷ ዕድሜ ያሉ ወጣቶች\
\ ራሳቸውን እንዲጠብቁ በተደጋጋሚ መክራለች። \n\nለመሆኑ ሃና ኮሮናቫይረስ እንዴት ሊያገኛት ቻለ? \n\nየበሽታውምልክት\n\
\nየበሽታው ምልክት የጀመረኝ ሰኔ 23 [ጁን 30] ገደማ ነው። ከዚያ በፊት ባሉት ሳምንታት የተለያዩ የተቃውሞ ሰልፎች ላይ\
\ ስሳተፍ ነበር። ነገር ግን በዚህ ወቅት ምንም ዓይነት የቫይረሱ ምልክት አልታየብኝም። ሁሌም የአፍና የአፍንጫ መሸፈኛ ጭምብል\
\ እጠቀም ነበር። ሳኒታይዘርም ይዤ ነበር የምንቀሳቀሰው። \n\nምልክቱ መጀመሪያ ሲጀምረኝ መካከለኛ የሚባል ነበር። ከዚያ\
\ ግን ወዲያው በጣም እየከፋ መጣ። የመጀመሪያው ምልክት ራስ ምታት ነበር። በጣም ከባድ ራስ ምታት። ከዚያ ያቅለሸልሸኝ ጀመር።\
\ ጥርሴን ለመፋቅ ብሩሽ ስጠቀም ይሁን አሊያም ምግብ ልመገብ ስል ወደላይ ይለኛል ግን አያስመልሰኝም። ይህ ሁሉ ስሜት የተሰማኝ\
\ በአንድ ሌሊት ነው። \n\nከዚያ በኋላ ባሉት ቀናት የተለያዩ ስሜቶችን አስተናግጃለሁ። ሌላኛው ስሜት ደግሞ መፍዘዝ ነበር፤\
\ በጣም ይደክመኝም ነበር። የምግብ ፍላጎቴም እጅጉን ቀንሶ ነበር። አስታውሳለሁ ቁጭ ብዬ ምግብ እየላሁ ከትንሽ ጉርሻ በኋላ\
\ የምግቡ ጣዕም እየጠፋብኝ መጣ።\n\nከእነዚህ ሁሉ ምልክቶች ግን የከፋ የነበረው ድንገት ሌሊት ላይ የጀመረኝ ምልክት ነው።\
\ ይህም የትንፋሽ ማጠር ነው። በጣም ያስደነገጠኝ እሱ ነበር። \n\nይህን ጊዜ ነው ወደ ሕክምና ተቋም ሄጄ የተመረመርኩት።\
\ \n\nከዚያ በፊት ግን የተለያዩ ተቃውሞዎች ላይ ተሳትፌ ስለነበር ምልክቱ ባይኖረኝም ጥቂት ጊዜያት ተመርምሬ ነበር። በዚህኛው\
\ ዙር ግን ከተመርመርኩ በኋላ ራሴን አግልዬ ተቀመጥኩ። ይህንን ያደረግኩት ከሰኔ 24 ጀምሮ ነው። \n\nውጤትጥበቃ\n\n\
ከተመረመርኩ በኋላ ያለው ጊዜ በጣም አስጨናቂ ነበር። በተለይ ውጤቱ መጠበቅ እጅግ ግራ አጋቢ ስሜት ነበረው። ምንም እንኳ በሽታው\
\ እንደሚኖርብኝ ብጠረጥርም፤ ቢኖርብኝስ የሚል ጥያቄ ሃሳብ ደጋግሞ ይመጣብኛል። ከየት ይሆን ያገኘሁት? ለሰው አስተላልፌውስ\
\ ቢሆን? አለብሽ ከተባልኩ ምን እሆናለሁ? ብቻ የተለያዩ ሃሳቦች ወደ አእምሮዬ ይመጡ ነበር። \n\nየበሽታው ምልክት ሳይታይብኝ\
\ የተመረመርኩ ጊዜ ውጤት የመጣልኝ ወዲያው ነበር። ነገር ግን በሁለተኛው ዙር ይመስለኛል ከአምስት ቀናት በኋላ ነው ውጤቴን\
\ የሰማሁት። \n\nበጆርጂያ ግዛት ቫይረሱ የሚገኝባቸው ሰዎች ቁጥር እየጨመረ በመምጣቱ ውጤት ይፋ የሚሆንበት ቀንም እየጨመረ\
\ መጥቶ ነበር። \n\nበስተመጨረሻ የተመረመርኩ ጊዜ ውጤቴን በፅሑፍ መልዕክት የነገሩኝ ከዘጠኝ ቀናት በኋላ ነው። ደውለው\
\ ውጤትሽ 'ፖዘቲቭ' ነው እና ራስሽይን አግልይ አሉኝ። እንግዲህ ይሄ የሆነው እኔ ራሴን አግልዬ ከቆየሁ በኋላ ነው።\n\n\
ቫይረሱ ሊይዘኝ ይችላል ብለሽ አስበሽ ታውቂያለሽ? ብባል፤ መልሴ \"አዎ\" ነው። ምክንያቱም በየትኛውም ጊዜ ከቤቴ ከወጣሁ\
\ ሊይዘኝ እንደሚችል አስባለሁ። \n\nእንዲያውም ወረርሽኙ የገባ ሰሞን ለሦስት... "
- source_sentence: የእሁዱ የፋሲል ከነማ ጨዋታ የቴሌቪዥን ሽፋን ያገኛል
sentences:
- አዲስ አበባ ፣ ህዳር 12 ፣2013 (ኤፍ.ቢ.ሲ) በሚድሮክ ግሩፕ የተገነባው ባለ 25 ፎቅ ህንጻ ስራ ጀምረየአዲስ አበባ ምክትል
ከንቲባ ወይዘሮ አዳነች አቤቤ በማህበራዊ የትስስር ገጻቸው ላይ÷በሚድሮክ ግሩፕ በከተማችን ውብ ሆኖ ተገንብቶ ለ13 አመታት
ያለ ስራ ቆሞ የነበረውን ባለ 25 ፎቅ ሚና ህንጻን ዛሬ ስራ በማስጀመራችን ደስ ብሎኛል ብለዋል።በሚና ህንጻ አዲስ የተከፈተውን
የተለያዩ የሃገራችንን ምርቶች በአንድ ቦታ ለሸማቾች እንዲቀርብ እየሰራ ያለውን ኩዊንስ ሱፐር ማርኬትንም መርቀናል ነው ያሉት።ከንቲባዋ
አያይዘውም የእንደዚህ አይነት የዘመናዊ የችርቻሮ አውታር መብዛትና መስፋፋት የነዋሪያችን የገበያ ፍላጎት በማሟላት በአምራችና
ሸማች መሃከል ያለውን የግብይት ሰንሰለት ያሳጥራል ሲሉ ገልጸዋል።ይህም የኑሮ ውድነት እና የዋጋ ንረትን በመቀነስ እንዲሁም
የስራ እድልን በመፍጠር ረገድ ትልቅ ሚና አለውም ነው ያሉት።በጦር ሃይሎች አካባቢ የተከፈተውን ተመሳሳይ የገበያ ማዕከል ከወር
በፊት ጎብኝተናል በሌሎች አካባቢም እንዲከፈቱ ባቀረብነው ጥያቄ መሰረት ተግባራዊ ምላሽ ስለሰጡ የሚድሮክ ግሩፕ አመራሮችን ከልብ
አመሰግናለሁ ብለዋል።አሁንም እንዲህ አይነት የህዝቡን ኑሮ መደጎም የሚችሉ ኢንቨስትመንቶችን እናበረታታለንም ነው ያሉት።
- ግብፅ ወደ ናይል ትብብር ማዕቀፍ ለመቀላቀል ያቀረበችውን ጥያቄ የትብብር ማዕቀፍ አባል ሃገራት የሚኒስትሮች ምክር ቤት ውድቅ
ማድረጉ የውሃ መስኖና ኤሌክትሪክ ሚኒስትር አስታወቀ ።ሚኒስትሩ ዶክተር ኢንጂነር ስለሺ ጌታሁን እንደገለጹትበመጋቢት ወር በዩጋንዳ
ኢንተቤ በተደረገው የሚኒስትሮች ጉባኤ ግብፅ ወደ ትብብር ማዕቀፉ ለመመለስ ያቀረበችው ጥያቄ ውድቅ ሆኗል።የውሃ መስኖና ኤሌክትሪክ
ሚኒስትሩ ዶክተር ኢንጂነር ስለሺ ጌታሁን፤ ግብፅ በ1959 የቅኝ ግዛት ዘመን ስምምነት መሰረት የውሃ አጠቃቀም ተግባራዊ ሊደረግ
ይገባል የሚል ሀሳብ ይዛ መቅረቧ ጥያቄው ውድቅ እንዲሆን እንዳደረገው ነው ያስታወቁት።የግብጽ ሃሳብ ኢትዮጵያ እንደሃገር የማትቀበለውና
የትብብር ማዕቀፉ የቆመላቸውን ምሰሶዎች የሚያፈርስ መሆኑንም ሚኒስትሩ አስገንዝበዋል።ኢትዮጵያን ጨምሮ ከተፋሰሱ ሀብት ያልተጠቀሙ
ሃገራትን ተጠቃሚ በሚያደርግ መልኩ የትብብር ማዕቀፉ ሲዘጋጅ እንደ ሃገር የተደረገው ክርክርም ይህን ለማስቀረት ያለመ እንደነበርም
አንስተዋል።በታላቁ ህዳሴ ግድብ የውሃ አያያዝ፣ አለቃቀቅና ማህበራዊ ተፅዕኖን አስመልክቶ ቀጣይ ጥናት እንዲያካሂዱ የተመረጡት
ሁለቱ የፈረንሳይ ኩባንያዎችም፤ ጥናቱን በምን መልኩ ለማካሄድ እንደተዘጋጁ ለሶስቱ ሃገራት ሪፖርታቸውን አቅርበው ሃገራቱ ምላሻቸውን
ለኩባንያዎቹ ማቅረባቸውን ሚኒስትሩ ገልጸዋል።ኢትዮጵያም በጥናቱ ከግድቡ አሉታዊ ተፅዕኖዎች ባለፈ ያለውን አወንታዊ ተፅዕኖና
ጠቀሜታ በጥናቱ እንዲካተት ሃሳቧን አቅርባለችም ነው ያሉት።ባለፈው ወር በግብፅ፣ ትናንት ደግሞ በአዲስ አበባ ሶስቱ ሃገራት
በጉዳዩ ዙሪያ ውይይት ካደረጉ በኋላ ስምምነት በተደረገባቸው ጉዳዮች ላይ ኩባንያዎቹ ጥናት ማካሄድ ጀምረዋል።የናይል ተፋሰስ
ትብብር ማዕቀፍ ስምምነት የተፈጥሮ ሀብትን በጋራና በፍትሃዊነት ለመጠቀም ኢትዮጵያ፣ ዲሞክራቲክ ሪፐብሊክ ኮንጎ፣ ዩጋንዳ፣ ታንዛኒያ፣
ሩዋንዳ፣ ብሩንዲ፣ ኬኒያ፣ ሱዳንና ደቡብ ሱዳን የተስማሙበት እንደሆነም አብራርተዋል ።ኢትዮጵያን ጨምሮ አምስቱ አባል ሃገራት
ስምምነቱን በምክር ቤቶቻቸው ሲያፅድቁ ሌሎች አባል ሃገራት በሂደት ላይ ይገኛሉ።የትብብር ማዕቀፉን አንቀጾች ባለመቀበል ግብፅ
ከትብብሩ ብትርቅም ባለፉት ዘጠኝ ዓመታት የትብብሩ አንቀጾች እየተቀበለች መምጣቷን የውሃ መስኖና ኤሌክትሪክ ጠቅሰዋል።ባለፉት
ጥቂት ወራት ግብፅ ወደ ናይል ትብብር ማዕቀፍ ለመመለስ ጥያቄ አቅርባም ከአባል ሃገራቱ በተዋቀረው የሚኒስትሮች ምክር ቤት የተመረጡ
የሱዳን፣ ሩዋንዳና ዩጋንዳ ሚኒስትሮች ጉዳዩን ሲመረምሩ ቆይተዋል ብለዋል ።በሁለቱ ውይይቶች ስምምነት ባልተደረሰባቸው ጉዳዮች
ደግሞ በቀጣይ በካርቱም የሶስትዮሹ ውይይት የሚቀጥል ይሆናል።ሚኒስትሩ ከግድቡ ግንባታ ጋር ተያይዞ የታላቁ ህዳሴ ግድብ ግንባታ
አሁን ላይ 58 ነጥብ 4 በመቶ ተጠናቋል-(ኤፍ ቢ ሲ) ።
- በካፍ ኮንፌዴሬሽን ዋንጫ ፋሲል ከነማ ከታንዛንያው አዛም ጋር የሚያደርገው ጨዋታ የቀጥታ የቴሌቪዥን ስርጭት እንደሚያገኝ ታውቋል።ሃዋሳ
ከተማን በመለያ ምት በማሸነፍ ወደ አህጉራዊ ውድድሮች በታሪካቸው ለመጀመሪያ ጊዜ ያመሩት ፋሲል ከነማዎች እሁድ ከታንዛኒያው
ክለብ አዛም ጋር የመጀመሪያ የማጣሪያ ጨዋታቸውን ያደርጋሉ። በባህር ዳር ዓለምአቀፍ ስታዲየም የሚደረገው ይህ ጨዋታም በአማራ
ብዙሃን መገናኛ ድርጅት (አማራ ቲቪ) በቀጥታ እንደሚተላለፍ ታውቋል። ከተቋሙ ባገኘነው መረጃ መሰረት ጨዋታውን ለማስተላለፍ
ስራዎች እየተሰሩ መሆናቸው ሲገለፅ መቀመጫውን ታንዛኒያ ያደረገው አዛም ቲቪም ከቴሌቪዥን ጣቢያው (አማራ ቲቪ) ጋር ተስማምቶ
ጨዋታውን ለማስተላለፍ እንደተወሰነ እና መከፈል ያለበትን ክፍያ ለጣቢያው እንደፈፀመ ተነግሯል።በተያያዘ ዜና ጨዋታውን የሚመሩት
ሱዳናዊው ዳኞች ባህር ዳር መግባታቸው ተረጋግጣል። ሶከር ኢትዮጵያም ጨዋታውን በቀጥታ የፅሁፍ ስርጭት ለስፖርት ቤተሰቡ እንደሚያደርስ
ለማሳወቅ ይወዳል።
- source_sentence: ኤም ቲ ኤን ለአልቃይዳና ታሊባን ጉቦ በመስጠት ተወነጀለ
sentences:
- "ኩባንያው ለእነዚህ ቡድኖች ገንዘብ የሰጠው አፍጋኒስታን ውስጥ የሚገኝና ትልቅ ኢንቨስትመንት ያደረገባቸው የኔትዎርክ ታዎሮች\
\ ላይ ጥቃት እንዳይደርስበት ለጥበቃ ነው ተብሏል።\n\nበውንጀላው መቀመጫቸውን አሜሪካ ያደረጉ ሌሎች አምስት ኩባንያዎችም\
\ ያሉ ሲሆን ክሱ የቀረበው አፍጋኒስታን ውስጥ በተገደሉ የአሜሪካ ዜጎች ስም ነው።\n\n• ጃዋር ኦሮሞ ፌደራሊስት ኮንግረስን\
\ መቀላቀሉ ተረጋገጠ\n\n• ሱዳን በ29 ግለሰቦች ላይ የሞት ፍርድ አስተላለፈች\n\n• \"ሙስና ለመጣው ፖለቲካዊ ለውጥ\
\ አንድ ምክንያት ነው\" \n\nበቀረበው ክስ እንደተባለው ሁለቱ አሸባሪ ቡድኖች ከኤም ቲ ኤን ያገኙትን ገንዘብ እንደ አውሮፓውያኑ\
\ ከ2009 እስከ 2017 አፍጋኒስታን ውስጥ ላካሄዷቸው የጥቃት ዘመቻዎች ተጠቅመውበታል።\n\nይህ ደግሞ የአሜሪካን የፀረ\
\ ሽብር አዋጅን የሚፃረር ነው፤ ስለዚህም ኤም ቲ ኤን ይህን ህግ ተላልፏል ተብሏል።\n\nኩባንያው ግን በየትኛውም ቦታ ስራውን\
\ የሚያካሂደው ሃላፊነት በተሞላበት መንገድ እንደሆነ ገልጿል።\n\nኤም ቲ ኤን በአፍሪካ ግዙፉ፤ በአለም አቀፍ ደረጃ ደግሞ\
\ ከስምንት ትልልቅ ኩባንያዎች አንዱ ነው። ከ240 ሚሊዮን ተጠቃሚዎች በላይም አለው ተብሎ ይታመናል።\n\nበ2015 ያልተመዘገቡ\
\ ሲም ካርዶችን ባለመሰረዝ በናይጄሪያ ባለስልጣናት በቀረበበት ክስ 5 ቢሊዮን ዶላር እንዲቀጣ ተፈርዶበት፤ ከረዥም ክርክር\
\ በኋላ እንዲሁም የያኔው የደቡብ አፍሪካ ፕሬዝዳንት ጃኮም ዙማ በጉዳዩ ጣልቃ ገብተው ቅጣቱ ወደ 1.7 ቢሊዮን ዶላር እንደወረደለት\
\ የሚታወስ ነው።\n\nየዛሬ ዓመትም በኢራን የቀድሞ የደቡብ አፍሪካ አምባሳደር ኤምቲኤን ኢራን ውስጥ እንዲሰራና የ 31.6\
\ ቢሊዮን ዶላር ፕሮጀክት እንዲያሸንፍ ጉቦ ተቀብለዋል በሚል መታሰራቸውም ይታወሳል።\n\n "
- አዲስ አበባ፣ ጥቅምት 5፣ 2013 (ኤፍ.ቢ.ሲ) ፈረንሳይ በሀገሪቱ ዳግም እያገረሸ የመጣውን የኮሮና ቫይረስ ወረርሽኝ ስርጭትን
ለመግታት ፓሪስን ጨምሮ በስምንት ከተሞች የሰዓት እላፊ ልትጥል ነው።
- ባለፉት 20 አመታት ዴሞክራሲን በማስረፅ ረገድ የህዝቦችን ተሳትፎ የቃኘ ጥናት ይፋ በሆነበት ወቅት እንደተገለፀው፤ በርካታ
የሚዲያ ተቋማት የዴሞክራሲ እሴቶችን አጉልቶ በማውጣት ረገድ ሰፊ ውስንነቶች ታተውባቸዋል፡፡ባለፉት ዓመታት በርካታዎቹ የስነ-ምግባር
መርሆዎችን ሳይጠብቁ የመዘገብ ዝንባሌ ነበራቸው ተብሏል፡፡በኢትዮጵያ ስራ አመራር ኢኒስቲትዩት በተካሄደውና ከተለያዩ የህብረተሰብ
ክፍሎች የተወከሉ ዜጎች በተሳተፉበት አውደ ጥናት በዋናነት በዴሞክራሲ ስርፀት ዙሪያ የዜጎችን ምልከታ፣ አተገባበርና ተፅእኖን
በመገምገም መፍትሄን ማመላከት እንደሆነ ተገልጿል፡፡ሙሉ በሙሉ በአዲስ አበባ ከተማ ላይ ትኩረቱን ባደረገው በዚህ ጥናት ፖለቲከኞች
ሀገራዊ አጀንዳዎች ላይ በማተኮር ሀሳብ ከማመንጨት ይልቅ በግላዊ ጥቅሞቻቸው ላይ ማተኮራቸው ሀላፊነታቸውን በአግባቡ እንዳይወጡ
ማድረጉ ተነስቷል፡፡ዜጎችም ቢሆኑ ነገሮችን በሰከነ ሁኔታ ከማጤን ይልቅ በስሜታዊነት ወደ አላስፈላጊ ግጭቶች የማምራት ሁኔታ
ሲስተዋልባቸው እንደነበር ያመላከተው ጥናቱ፤ ይህም ዴሞክራሲ ስር እንዳይሰድ የራሱን ተፅዕኖ ማሳደሩን ተነስቷል፡፡በመንግስት
በኩል የታዩ ክፍተቶችንም ጥናቱ ያመላከተ ሲሆን፤ በተለይም ለአሰራር ምቹ ያልሆኑ አደረጃጀቶችን ያለመተቸት ችግር፣ በፓርቲዎች
የግል አቋም ጣልቃ መግባትን ጨምሮ የዴሞክራሲ ተቋማትን ተናበውና ተቀናጅተው እንዲሰሩ ያለማድረግ ችግሮች ተስተውለዋል፡፡በመሆኑም
ሁሉም ባለድርሻ አካላት ለዴሞክራሲ ስርፀጥ ሚናውን በአግባቡ እንዲወጣ በመድረኩ ጥሪ ቀርቧል፡፡
model-index:
- name: BERT Amharic Text Embedding Small
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.5560931357726036
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6976111279104928
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7535530692470517
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8097973994557001
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5560931357726036
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.23253704263683098
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.15071061384941034
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08097973994557
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5560931357726036
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6976111279104928
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7535530692470517
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8097973994557001
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6809741097507487
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6399296105903295
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6457194091829267
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.5512549138191715
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6924705170849713
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7393407922588449
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.807983066223163
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5512549138191715
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2308235056949904
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14786815845176898
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0807983066223163
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5512549138191715
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6924705170849713
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7393407922588449
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.807983066223163
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6754787963348321
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6336093231769057
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6391902642253109
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.5382521923193226
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6785606289688539
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7323858482007862
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7943755669791351
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5382521923193226
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2261868763229513
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14647716964015725
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07943755669791351
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5382521923193226
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6785606289688539
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7323858482007862
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7943755669791351
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6626715802528078
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6209357495644148
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6268762533351724
name: Cosine Map@100
---
# BERT Amharic Text Embedding Small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [rasyosef/bert-small-amharic](https://huggingface.co/rasyosef/bert-small-amharic) on a JSON dataset of 62,833 Amharic anchor/positive (headline and article) pairs. It maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [rasyosef/bert-small-amharic](https://huggingface.co/rasyosef/bert-small-amharic) <!-- at revision b4480f0a1501f98c781d52935a95c02f903640c9 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 512 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** am (Amharic)
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("yosefw/bert-amharic-embed-small-v5")
# Run inference
sentences = [
'ኤም ቲ ኤን ለአልቃይዳና ታሊባን ጉቦ በመስጠት ተወነጀለ',
'ኩባንያው ለእነዚህ ቡድኖች ገንዘብ የሰጠው አፍጋኒስታን ውስጥ የሚገኝና ትልቅ ኢንቨስትመንት ያደረገባቸው የኔትዎርክ ታዎሮች ላይ ጥቃት እንዳይደርስበት ለጥበቃ ነው ተብሏል።\n\nበውንጀላው መቀመጫቸውን አሜሪካ ያደረጉ ሌሎች አምስት ኩባንያዎችም ያሉ ሲሆን ክሱ የቀረበው አፍጋኒስታን ውስጥ በተገደሉ የአሜሪካ ዜጎች ስም ነው።\n\n• ጃዋር ኦሮሞ ፌደራሊስት ኮንግረስን መቀላቀሉ ተረጋገጠ\n\n• ሱዳን በ29 ግለሰቦች ላይ የሞት ፍርድ አስተላለፈች\n\n• "ሙስና ለመጣው ፖለቲካዊ ለውጥ አንድ ምክንያት ነው" \n\nበቀረበው ክስ እንደተባለው ሁለቱ አሸባሪ ቡድኖች ከኤም ቲ ኤን ያገኙትን ገንዘብ እንደ አውሮፓውያኑ ከ2009 እስከ 2017 አፍጋኒስታን ውስጥ ላካሄዷቸው የጥቃት ዘመቻዎች ተጠቅመውበታል።\n\nይህ ደግሞ የአሜሪካን የፀረ ሽብር አዋጅን የሚፃረር ነው፤ ስለዚህም ኤም ቲ ኤን ይህን ህግ ተላልፏል ተብሏል።\n\nኩባንያው ግን በየትኛውም ቦታ ስራውን የሚያካሂደው ሃላፊነት በተሞላበት መንገድ እንደሆነ ገልጿል።\n\nኤም ቲ ኤን በአፍሪካ ግዙፉ፤ በአለም አቀፍ ደረጃ ደግሞ ከስምንት ትልልቅ ኩባንያዎች አንዱ ነው። ከ240 ሚሊዮን ተጠቃሚዎች በላይም አለው ተብሎ ይታመናል።\n\nበ2015 ያልተመዘገቡ ሲም ካርዶችን ባለመሰረዝ በናይጄሪያ ባለስልጣናት በቀረበበት ክስ 5 ቢሊዮን ዶላር እንዲቀጣ ተፈርዶበት፤ ከረዥም ክርክር በኋላ እንዲሁም የያኔው የደቡብ አፍሪካ ፕሬዝዳንት ጃኮም ዙማ በጉዳዩ ጣልቃ ገብተው ቅጣቱ ወደ 1.7 ቢሊዮን ዶላር እንደወረደለት የሚታወስ ነው።\n\nየዛሬ ዓመትም በኢራን የቀድሞ የደቡብ አፍሪካ አምባሳደር ኤምቲኤን ኢራን ውስጥ እንዲሰራና የ 31.6 ቢሊዮን ዶላር ፕሮጀክት እንዲያሸንፍ ጉቦ ተቀብለዋል በሚል መታሰራቸውም ይታወሳል።\n\n ',
'ባለፉት 20 አመታት ዴሞክራሲን በማስረፅ ረገድ የህዝቦችን ተሳትፎ የቃኘ ጥናት ይፋ በሆነበት ወቅት እንደተገለፀው፤ በርካታ የሚዲያ ተቋማት የዴሞክራሲ እሴቶችን አጉልቶ በማውጣት ረገድ ሰፊ ውስንነቶች ታተውባቸዋል፡፡ባለፉት ዓመታት\xa0በርካታዎቹ \xa0የስነ-ምግባር መርሆዎችን ሳይጠብቁ \xa0የመዘገብ ዝንባሌ ነበራቸው ተብሏል፡፡በኢትዮጵያ ስራ አመራር ኢኒስቲትዩት በተካሄደውና ከተለያዩ የህብረተሰብ ክፍሎች የተወከሉ ዜጎች በተሳተፉበት አውደ ጥናት በዋናነት በዴሞክራሲ ስርፀት ዙሪያ የዜጎችን ምልከታ፣ አተገባበርና ተፅእኖን በመገምገም መፍትሄን ማመላከት እንደሆነ ተገልጿል፡፡ሙሉ በሙሉ በአዲስ አበባ ከተማ ላይ ትኩረቱን ባደረገው በዚህ ጥናት ፖለቲከኞች ሀገራዊ አጀንዳዎች ላይ በማተኮር ሀሳብ ከማመንጨት ይልቅ በግላዊ ጥቅሞቻቸው ላይ ማተኮራቸው ሀላፊነታቸውን በአግባቡ እንዳይወጡ ማድረጉ ተነስቷል፡፡ዜጎችም ቢሆኑ \xa0ነገሮችን በሰከነ ሁኔታ ከማጤን ይልቅ በስሜታዊነት ወደ አላስፈላጊ ግጭቶች የማምራት ሁኔታ ሲስተዋልባቸው እንደነበር ያመላከተው\xa0ጥናቱ፤ ይህም ዴሞክራሲ ስር እንዳይሰድ የራሱን ተፅዕኖ ማሳደሩን ተነስቷል፡፡በመንግስት በኩል የታዩ ክፍተቶችንም ጥናቱ ያመላከተ ሲሆን፤ በተለይም ለአሰራር ምቹ ያልሆኑ አደረጃጀቶችን ያለመተቸት ችግር፣ በፓርቲዎች የግል አቋም ጣልቃ መግባትን ጨምሮ የዴሞክራሲ ተቋማትን ተናበውና ተቀናጅተው እንዲሰሩ ያለማድረግ ችግሮች ተስተውለዋል፡፡በመሆኑም ሁሉም ባለድርሻ አካላት ለዴሞክራሲ ስርፀጥ ሚናውን በአግባቡ እንዲወጣ በመድረኩ ጥሪ ቀርቧል፡፡',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 512]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
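Since the model was trained with a Matryoshka objective over 512, 256 and 128 dimensions, the embeddings can also be truncated to a smaller size with only a modest drop in retrieval quality (see the evaluation table below). A minimal sketch, assuming the `truncate_dim` argument available in recent Sentence Transformers releases:
```python
from sentence_transformers import SentenceTransformer

# Keep only the first 256 Matryoshka dimensions of every embedding
model_256 = SentenceTransformer("yosefw/bert-amharic-embed-small-v5", truncate_dim=256)

embeddings = model_256.encode(["ኤም ቲ ኤን ለአልቃይዳና ታሊባን ጉቦ በመስጠት ተወነጀለ"])
print(embeddings.shape)
# (1, 256)
```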
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_512`, `dim_256` and `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_512 | dim_256 | dim_128 |
|:--------------------|:----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.5561 | 0.5513 | 0.5383 |
| cosine_accuracy@3 | 0.6976 | 0.6925 | 0.6786 |
| cosine_accuracy@5 | 0.7536 | 0.7393 | 0.7324 |
| cosine_accuracy@10 | 0.8098 | 0.808 | 0.7944 |
| cosine_precision@1 | 0.5561 | 0.5513 | 0.5383 |
| cosine_precision@3 | 0.2325 | 0.2308 | 0.2262 |
| cosine_precision@5 | 0.1507 | 0.1479 | 0.1465 |
| cosine_precision@10 | 0.081 | 0.0808 | 0.0794 |
| cosine_recall@1 | 0.5561 | 0.5513 | 0.5383 |
| cosine_recall@3 | 0.6976 | 0.6925 | 0.6786 |
| cosine_recall@5 | 0.7536 | 0.7393 | 0.7324 |
| cosine_recall@10 | 0.8098 | 0.808 | 0.7944 |
| **cosine_ndcg@10** | **0.681** | **0.6755** | **0.6627** |
| cosine_mrr@10 | 0.6399 | 0.6336 | 0.6209 |
| cosine_map@100 | 0.6457 | 0.6392 | 0.6269 |
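These figures were produced with the `InformationRetrievalEvaluator` at the three Matryoshka dimensionalities. A minimal sketch of how a comparable evaluation could be set up; the `queries`, `corpus` and `relevant_docs` dictionaries below are illustrative placeholders rather than the actual evaluation split:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("yosefw/bert-amharic-embed-small-v5")

# Illustrative placeholders: ids mapped to query text, corpus text and relevant corpus ids
queries = {"q1": "የእሁዱ የፋሲል ከነማ ጨዋታ የቴሌቪዥን ሽፋን ያገኛል"}
corpus = {"d1": "በካፍ ኮንፌዴሬሽን ዋንጫ ፋሲል ከነማ ከታንዛንያው አዛም ጋር የሚያደርገው ጨዋታ የቀጥታ የቴሌቪዥን ስርጭት እንደሚያገኝ ታውቋል።"}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_512",
)
results = evaluator(model)  # dict of accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100
print(results)
```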
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 62,833 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 16.23 tokens</li><li>max: 91 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 315.61 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>የ8ኛ እና የ12ኛ ክፍል ተማሪዎች የማካካሻ ትምህርት መማር ጀመሩ።</code> | <code>ባሕር ዳር፡ ጥቅምት 10/2013 ዓ.ም (አብመድ) በባሕር ዳር ከተማ አስተዳደር ለሚገኙ የ12ኛ እና የ8ኛ ክፍል ተማሪዎች የማካካሻ ትምህርት መስጠት መጀመሩን መምሪያው አስታውቋል፡፡በባሕር ዳር ከተማ አስተዳደር ለሚገኙ ለ12ኛ እና ለ8ኛ ክፍል ተማሪዎች ሀገራዊና ሀገር አቀፍ ዜና ፈተና ከመወስዳቸው በፊት ለ45 ቀናት የሚቆይ የማካካሻ ትምህርት ከጥቅምት 09/2013 ዓ.ም ጀምሮ መስጠት መጀመሩን የከተማ አስተዳደሩ ትምህርት መምሪያ ምክትል ኃላፊ መላክ ጀመረ ተናግረዋል፡፡“ዛሬ ተቀብለን ማስተማር የጀመርነው የኮሮናቫይረስን ለመከላከል የአፍና የአፍንጫ መሸፈኛ ጭምብል የተጠቀሙ ተማሪዎችን ብቻ ነው፡፡ ከትምህርት ሚኒስቴር የተላከው ጭምብል እስከዛሬ ባይደርሰንም ወላጆች ለልጆቻቸው በገዙት ተጠቅመን ነው ማስተማር የጀመርነው” ብለዋል አቶ መላክ። መማርም ሆነ ማስተማር የሚቻለው ጤና ሲኖር ብቻ ስለሆነ ተማሪዎች ያለማንም ክትትል ጭምብል እንዲጠቀሙም ጥሪ አቅርበዋል፡፡በሚቀጥለው ሳምንት ከ1ኛ ክፍል በስተቀር ሁሉም ትምህርት ቤቶች ለሦስት ሳምንታት የማካካሻ ትምህርት እንደሚወስዱ የተናገሩት ምክትል መምሪያ ኃላፊው ከማካካሻው ትምህርት በኋላ የ2013 ትምህርት ዘመን ሙሉ በሙሉ የመማር ማስተማር ሂደቱ እንደሚቀጥል አስገንዝበዋል፡፡ወረርሽኙን ለመከላከል ሲባል ትምህርት ሚኒስቴር ባስቀመጠው አቅጣጫ መሠረት የመንግሥት ትምህርት ቤቶች ከአንድ እስከ ሦስት ፈረቃ እንዲሁም የግል ትምህርት ቤቶች ደግሞ በአንድ እና ሁለት ፈረቃ ብቻ ማስተማር እንደሚቀጥሉ አቶ መላክ ጠቁመዋል፡፡</code> |
| <code>በክልሎች በተፈጠሩ ግጭቶች ላይ ተሳትፈዋል በተባሉ 1 ሺህ 323 ተጠርጣሪዎች ላይ ክስ ተመሰረተ</code> | <code>በተለያዩ ክልሎች በተፈጠሩ ግጭቶች ላይ ተሳትፈዋል በተባሉ 1 ሺህ 323 ተጠርጣሪዎች ላይ ክስ ተመሰረተ።በ2011 በጀት ዓመት በተለያዩ ክልሎች በተፈጠሩ ግጭቶች ተሳትፈዋል በተባሉ 1 ሺህ 323 ተጠርጣሪዎች ላይ ክስ መመስረቱን የኢፌዲሪ ጠቅላይ ዐቃቤ ሕግ አስታወቀ፡፡በተፈጠረው ሁከትና ብጥብጥ 1 ሺህ 229 ሰዎች ህይዎት ያለፈ ሲሆን በ1 ሺህ 393 ሰዎች ላይ ደግሞ ቀላልና ከባድ የአካል ጉዳት ሲደርስ በ19 ሰዎች ላይ የግድያ ሙከራ መደረጉን በጠቅላይ ዐቃቤ ሕግ የተደራጁ ድንበር ተሸጋሪ ወንጀሎች ዳይሬክተር የሆኑት አቶ ፍቃዱ ፀጋ ገልፀዋል፡፡በግጭቶቹ ከ2.2 ቢሊዮን ብር በላይ የሚገመት የዜጎች ንብረት የወደመ ሲሆን፤ 1.2 ሚሊዮን ዜጎች ከመኖሪያ ቤታቸውና ከአካባቢያቸው ተፈናቅለዋል፡፡ከተከሳሾቹ መካከል 645 ተጠርጣሪዎች በቁጥጥር ስር ውለው ጉዳያቸው እየታየ ሲሆን 667 የሚሆኑ ተጠርጣሪዎች ደግሞ በቁጥጥር ስር አልዋሉም፡፡የ10 ተጠርጣሪዎች ክስም በምህረት መነሳቱን ዳይሬክተሩ አክለዋል፡፡በመጨረሻም አቶ ፍቃዱ ተጠርጣሪዎችን በቁጥጥር ስር ለማዋል በሚደረግ እንቅስቃሴ ዙሪያ የሚስተዋለው ክፍተት አስመልክቶ መፍትሔ ያሉትን ሀሳብ ሲጠቁሙ ይህንን ችግር ለመቅረፍ ህብረተሰቡና የሚመለከታቸው ባለድርሻ አካላት ከፍትህ አካላት ጎን በመቆምና ተጠርጣሪዎችን አሳልፎ በመስጠት በኩል በጉዳዩ ላይ በባለቤትነት ስሜት ሊሰሩ እንደሚገባ አጽእኖት ሰጥተው መልዕክታቸውን አስተላልፈዋል፡፡በሌላ በኩል በአማራ ክልል በጃዊ ወረዳና በመተክል ዞን፤ በጎንደርና አካባቢው በተፈጠረ ሁከትና ብጥብጥ፤ በሰሜን ሸዋ አስተዳደር እንዲሁም በቤንሻጉል ጉምዝ ክልል ከማሻ ዞን ውስጥ በሚገኙ የተለያዩ ወረዳዎችና ቀበሌዎችና የዚሁ ዞን አጎራባች በሆነው በኦሮሚያ ክልል ምስራቅና ምዕራብ ወለጋ ዞን በተለያዩ ቀ...</code> |
| <code>ከሽመና ሥራ ---- እስከ ሚሊየነርነት! </code> | <code>“ይቅርታ መጠየቅ ጀግንነት እንጂ ሽንፈት አይደለም”የኮንሶው ተወላጅ አቶ ዱላ ኩሴ፤ቤሳቤስቲን አልነበራቸውም፡፡ ለብዙ ዓመታት በሽመና ስራ ላይ ቆይተዋል፡፡ በብዙ ልፋትና ትጋት፣ወጥተው ወርደው፣ ነው ለስኬት የበቁት፡፡ ዛሬበሚሊዮን ብሮች የሚንቀሳቀሱ የንግድ ድርጅቶች ባለቤት ሆነዋል፡፡ ባለጠጋ ናቸው፡፡ የ50 ዓመቱ ጎልማሳ አቶ ዱላ፤በልጆችም ተንበሽብሸዋል፡፡ የ17 ልጆች አባት ናቸው፡፡ በቅርቡበሚዲያ የሰጡት አንድ አስተያየት የአገሬውን ህዝብ ማስቆጣቱን የሚናገሩት ባለሃብቱ፤አሁን በሽማግሌ እርቅ ለመፍጠር እየተሞከረ መሆኑን ጠቁመዋል፡፡ ባለሃብቱ ከህዝቡ ጋር ቅራኔውስጥ የከተታቸው ጉዳይ ምን ይሆን? የአዲስ አድማስ ጋዜጠኛ ማህሌት ኪዳነወልድ፤ ከአቶ ዱላ ኩሴ ጋር ይሄን ጨምሮ በስኬት ጉዟቸውና በንግድ ሥራቸው ዙሪያ አነጋግራቸዋለች፡፡መቼ ነው የሽመና ሥራ የጀመሩት?በ13 ወይም በ14 ዓመቴ ይመስለኛል፡፡ ለቤተሰቤ አራተኛ ልጅ ነኝ፡፡ ለ10 ዓመታት ያህል በሽመና ስራ ላይ ቆይቻለሁ፡፡ ስራዎቼንም የምሸጠው እዛው በአካባቢው ላሉ ሰዎች ነበር፡፡ ቀጣዩ ሥራዎስ ምን ነበር?ወደ ጅንካ በመሄድ ለ4 ዓመታት ያህል ኦሞ ዞን ጂንካ ከተማ ላይ ሽያጩን ቀጠልኩ፡፡ በኋላም ወደ ወላይታ ተመልሼ፣ ማግና ሰዴቦራ /ብርድ ቦታ የሚለበስ የጋቢ አይነት/ መሸጥ ጀመርኩ፡፡ ለ3 ዓመታትም ቦዲቲ እየወሰድኩ ሸጫለሁ፡፡ እንግዲህ አቅም እየጠነከረ፣ ገንዘብ እየተሰበሰበ ሲመጣ፣ አነስተኛ ሸቀጣ ሸቀጥ ሱቅ ከፈትኩኝ፡፡ የቤት እቃና ልብስ መሸጥ ጀመርኩኝ፡፡ ብዙም ሳልቆይ ወደ ከተማ ወርጄ፣ ወደ ሆቴል ስራ ገባሁ፡፡ ተቀጥረው ነው ወይስ የራስዎን ሆቴል?የራሴን ነው፡፡ ኮንሶ እድገት ሆቴል ይባላል፡፡ በ91 ዓመተ ምህረት ነበር ሆቴሉን አነስ አድርጌ የከፈትኩት፡፡ በኋላም የሸቀጣሸቀጥ ገበያው እየተቀዛቀዘ በ...</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
512,
256,
128
],
"matryoshka_weights": [
1,
1,
1
],
"n_dims_per_step": -1
}
```
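A sketch of how this loss could be constructed with Sentence Transformers, wrapping the in-batch negatives objective so that the first 512, 256 and 128 dimensions are each trained to be useful on their own. The pooling setup mirrors the architecture described above; treat it as an illustration rather than the exact training script:
```python
from sentence_transformers import SentenceTransformer, models
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# Wrap the base checkpoint as a Sentence Transformer: Transformer -> mean pooling -> normalize
word_embedding = models.Transformer("rasyosef/bert-small-amharic", max_seq_length=512)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode="mean")
model = SentenceTransformer(modules=[word_embedding, pooling, models.Normalize()])

base_loss = MultipleNegativesRankingLoss(model)  # in-batch negatives over (anchor, positive) pairs
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[512, 256, 128])
```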
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `num_train_epochs`: 5
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
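The non-default values above can be assembled into a training run with the Sentence Transformers v3 trainer API. A minimal sketch, reusing the `model` and `loss` objects from the loss sketch in the previous section; the data path is an illustrative placeholder for the local JSON file of anchor/positive pairs:
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# Illustrative path to the anchor/positive training pairs
dataset = load_dataset("json", data_files="train_pairs.json", split="train")
dataset = dataset.train_test_split(test_size=0.05, seed=42)

args = SentenceTransformerTrainingArguments(
    output_dir="bert-amharic-embed-small-v5",
    num_train_epochs=5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,                 # SentenceTransformer from the loss sketch above
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    loss=loss,                   # MatryoshkaLoss from the loss sketch above
)
trainer.train()
```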
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 |
|:-------:|:--------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|
| 0.0204 | 10 | 8.5099 | - | - | - |
| 0.0407 | 20 | 7.5508 | - | - | - |
| 0.0611 | 30 | 6.0747 | - | - | - |
| 0.0815 | 40 | 4.8277 | - | - | - |
| 0.1018 | 50 | 4.1868 | - | - | - |
| 0.1222 | 60 | 3.0626 | - | - | - |
| 0.1426 | 70 | 2.9647 | - | - | - |
| 0.1629 | 80 | 2.3808 | - | - | - |
| 0.1833 | 90 | 2.1637 | - | - | - |
| 0.2037 | 100 | 2.0235 | - | - | - |
| 0.2240 | 110 | 2.0617 | - | - | - |
| 0.2444 | 120 | 1.7025 | - | - | - |
| 0.2648 | 130 | 1.7932 | - | - | - |
| 0.2851 | 140 | 1.706 | - | - | - |
| 0.3055 | 150 | 1.4933 | - | - | - |
| 0.3259 | 160 | 1.5664 | - | - | - |
| 0.3462 | 170 | 1.4154 | - | - | - |
| 0.3666 | 180 | 1.5204 | - | - | - |
| 0.3870 | 190 | 1.3972 | - | - | - |
| 0.4073 | 200 | 1.3655 | - | - | - |
| 0.4277 | 210 | 1.4794 | - | - | - |
| 0.4481 | 220 | 1.3696 | - | - | - |
| 0.4684 | 230 | 1.2406 | - | - | - |
| 0.4888 | 240 | 1.2307 | - | - | - |
| 0.5092 | 250 | 1.2954 | - | - | - |
| 0.5295 | 260 | 1.3026 | - | - | - |
| 0.5499 | 270 | 1.2405 | - | - | - |
| 0.5703 | 280 | 1.237 | - | - | - |
| 0.5906 | 290 | 1.2187 | - | - | - |
| 0.6110 | 300 | 1.0976 | - | - | - |
| 0.6314 | 310 | 1.1978 | - | - | - |
| 0.6517 | 320 | 1.1269 | - | - | - |
| 0.6721 | 330 | 1.0751 | - | - | - |
| 0.6925 | 340 | 0.9505 | - | - | - |
| 0.7128 | 350 | 1.1134 | - | - | - |
| 0.7332 | 360 | 1.0918 | - | - | - |
| 0.7536 | 370 | 0.9427 | - | - | - |
| 0.7739 | 380 | 1.0422 | - | - | - |
| 0.7943 | 390 | 1.0331 | - | - | - |
| 0.8147 | 400 | 0.9231 | - | - | - |
| 0.8350 | 410 | 0.9498 | - | - | - |
| 0.8554 | 420 | 0.9848 | - | - | - |
| 0.8758 | 430 | 0.9296 | - | - | - |
| 0.8961 | 440 | 1.0072 | - | - | - |
| 0.9165 | 450 | 0.8376 | - | - | - |
| 0.9369 | 460 | 0.935 | - | - | - |
| 0.9572 | 470 | 0.7741 | - | - | - |
| 0.9776 | 480 | 0.8884 | - | - | - |
| 0.9980 | 490 | 0.8525 | - | - | - |
| 1.0 | 491 | - | 0.5987 | 0.5911 | 0.5708 |
| 1.0183 | 500 | 0.732 | - | - | - |
| 1.0387 | 510 | 0.6374 | - | - | - |
| 1.0591 | 520 | 0.642 | - | - | - |
| 1.0794 | 530 | 0.651 | - | - | - |
| 1.0998 | 540 | 0.5616 | - | - | - |
| 1.1202 | 550 | 0.6751 | - | - | - |
| 1.1405 | 560 | 0.5725 | - | - | - |
| 1.1609 | 570 | 0.6779 | - | - | - |
| 1.1813 | 580 | 0.6126 | - | - | - |
| 1.2016 | 590 | 0.6333 | - | - | - |
| 1.2220 | 600 | 0.6144 | - | - | - |
| 1.2424 | 610 | 0.5795 | - | - | - |
| 1.2627 | 620 | 0.6421 | - | - | - |
| 1.2831 | 630 | 0.6581 | - | - | - |
| 1.3035 | 640 | 0.5602 | - | - | - |
| 1.3238 | 650 | 0.5432 | - | - | - |
| 1.3442 | 660 | 0.5863 | - | - | - |
| 1.3646 | 670 | 0.6106 | - | - | - |
| 1.3849 | 680 | 0.5948 | - | - | - |
| 1.4053 | 690 | 0.6029 | - | - | - |
| 1.4257 | 700 | 0.5935 | - | - | - |
| 1.4460 | 710 | 0.4886 | - | - | - |
| 1.4664 | 720 | 0.5825 | - | - | - |
| 1.4868 | 730 | 0.5732 | - | - | - |
| 1.5071 | 740 | 0.4534 | - | - | - |
| 1.5275 | 750 | 0.5513 | - | - | - |
| 1.5479 | 760 | 0.5679 | - | - | - |
| 1.5682 | 770 | 0.6028 | - | - | - |
| 1.5886 | 780 | 0.5002 | - | - | - |
| 1.6090 | 790 | 0.5362 | - | - | - |
| 1.6293 | 800 | 0.5899 | - | - | - |
| 1.6497 | 810 | 0.5807 | - | - | - |
| 1.6701 | 820 | 0.4952 | - | - | - |
| 1.6904 | 830 | 0.5656 | - | - | - |
| 1.7108 | 840 | 0.5707 | - | - | - |
| 1.7312 | 850 | 0.5203 | - | - | - |
| 1.7515 | 860 | 0.5939 | - | - | - |
| 1.7719 | 870 | 0.447 | - | - | - |
| 1.7923 | 880 | 0.493 | - | - | - |
| 1.8126 | 890 | 0.4972 | - | - | - |
| 1.8330 | 900 | 0.5028 | - | - | - |
| 1.8534 | 910 | 0.6546 | - | - | - |
| 1.8737 | 920 | 0.5032 | - | - | - |
| 1.8941 | 930 | 0.5613 | - | - | - |
| 1.9145 | 940 | 0.4963 | - | - | - |
| 1.9348 | 950 | 0.512 | - | - | - |
| 1.9552 | 960 | 0.5305 | - | - | - |
| 1.9756 | 970 | 0.5705 | - | - | - |
| 1.9959 | 980 | 0.4803 | - | - | - |
| 2.0 | 982 | - | 0.6536 | 0.6459 | 0.6266 |
| 2.0163 | 990 | 0.3358 | - | - | - |
| 2.0367 | 1000 | 0.2909 | - | - | - |
| 2.0570 | 1010 | 0.3514 | - | - | - |
| 2.0774 | 1020 | 0.3245 | - | - | - |
| 2.0978 | 1030 | 0.2989 | - | - | - |
| 2.1181 | 1040 | 0.3315 | - | - | - |
| 2.1385 | 1050 | 0.3156 | - | - | - |
| 2.1589 | 1060 | 0.3156 | - | - | - |
| 2.1792 | 1070 | 0.2989 | - | - | - |
| 2.1996 | 1080 | 0.2878 | - | - | - |
| 2.2200 | 1090 | 0.327 | - | - | - |
| 2.2403 | 1100 | 0.3278 | - | - | - |
| 2.2607 | 1110 | 0.3013 | - | - | - |
| 2.2811 | 1120 | 0.3342 | - | - | - |
| 2.3014 | 1130 | 0.2753 | - | - | - |
| 2.3218 | 1140 | 0.3339 | - | - | - |
| 2.3422 | 1150 | 0.3394 | - | - | - |
| 2.3625 | 1160 | 0.3122 | - | - | - |
| 2.3829 | 1170 | 0.3322 | - | - | - |
| 2.4033 | 1180 | 0.3354 | - | - | - |
| 2.4236 | 1190 | 0.2837 | - | - | - |
| 2.4440 | 1200 | 0.3303 | - | - | - |
| 2.4644 | 1210 | 0.3136 | - | - | - |
| 2.4847 | 1220 | 0.2824 | - | - | - |
| 2.5051 | 1230 | 0.3052 | - | - | - |
| 2.5255 | 1240 | 0.3075 | - | - | - |
| 2.5458 | 1250 | 0.3123 | - | - | - |
| 2.5662 | 1260 | 0.3347 | - | - | - |
| 2.5866 | 1270 | 0.2738 | - | - | - |
| 2.6069 | 1280 | 0.3219 | - | - | - |
| 2.6273 | 1290 | 0.3354 | - | - | - |
| 2.6477 | 1300 | 0.3056 | - | - | - |
| 2.6680 | 1310 | 0.3494 | - | - | - |
| 2.6884 | 1320 | 0.3286 | - | - | - |
| 2.7088 | 1330 | 0.2915 | - | - | - |
| 2.7291 | 1340 | 0.3158 | - | - | - |
| 2.7495 | 1350 | 0.3012 | - | - | - |
| 2.7699 | 1360 | 0.2816 | - | - | - |
| 2.7902 | 1370 | 0.2399 | - | - | - |
| 2.8106 | 1380 | 0.2449 | - | - | - |
| 2.8310 | 1390 | 0.3156 | - | - | - |
| 2.8513 | 1400 | 0.2994 | - | - | - |
| 2.8717 | 1410 | 0.335 | - | - | - |
| 2.8921 | 1420 | 0.33 | - | - | - |
| 2.9124 | 1430 | 0.2825 | - | - | - |
| 2.9328 | 1440 | 0.2933 | - | - | - |
| 2.9532 | 1450 | 0.2607 | - | - | - |
| 2.9735 | 1460 | 0.3466 | - | - | - |
| 2.9939 | 1470 | 0.3064 | - | - | - |
| 3.0 | 1473 | - | 0.6750 | 0.6674 | 0.6555 |
| 3.0143 | 1480 | 0.2303 | - | - | - |
| 3.0346 | 1490 | 0.207 | - | - | - |
| 3.0550 | 1500 | 0.2116 | - | - | - |
| 3.0754 | 1510 | 0.2179 | - | - | - |
| 3.0957 | 1520 | 0.204 | - | - | - |
| 3.1161 | 1530 | 0.1995 | - | - | - |
| 3.1365 | 1540 | 0.2077 | - | - | - |
| 3.1568 | 1550 | 0.2089 | - | - | - |
| 3.1772 | 1560 | 0.2135 | - | - | - |
| 3.1976 | 1570 | 0.2098 | - | - | - |
| 3.2179 | 1580 | 0.181 | - | - | - |
| 3.2383 | 1590 | 0.2247 | - | - | - |
| 3.2587 | 1600 | 0.2053 | - | - | - |
| 3.2790 | 1610 | 0.2124 | - | - | - |
| 3.2994 | 1620 | 0.231 | - | - | - |
| 3.3198 | 1630 | 0.189 | - | - | - |
| 3.3401 | 1640 | 0.2111 | - | - | - |
| 3.3605 | 1650 | 0.2238 | - | - | - |
| 3.3809 | 1660 | 0.1892 | - | - | - |
| 3.4012 | 1670 | 0.2265 | - | - | - |
| 3.4216 | 1680 | 0.2129 | - | - | - |
| 3.4420 | 1690 | 0.1952 | - | - | - |
| 3.4623 | 1700 | 0.1961 | - | - | - |
| 3.4827 | 1710 | 0.1774 | - | - | - |
| 3.5031 | 1720 | 0.205 | - | - | - |
| 3.5234 | 1730 | 0.2328 | - | - | - |
| 3.5438 | 1740 | 0.2092 | - | - | - |
| 3.5642 | 1750 | 0.2347 | - | - | - |
| 3.5845 | 1760 | 0.1783 | - | - | - |
| 3.6049 | 1770 | 0.2023 | - | - | - |
| 3.6253 | 1780 | 0.1829 | - | - | - |
| 3.6456 | 1790 | 0.2084 | - | - | - |
| 3.6660 | 1800 | 0.2091 | - | - | - |
| 3.6864 | 1810 | 0.2172 | - | - | - |
| 3.7067 | 1820 | 0.1661 | - | - | - |
| 3.7271 | 1830 | 0.1854 | - | - | - |
| 3.7475 | 1840 | 0.1963 | - | - | - |
| 3.7678 | 1850 | 0.2228 | - | - | - |
| 3.7882 | 1860 | 0.216 | - | - | - |
| 3.8086 | 1870 | 0.1956 | - | - | - |
| 3.8289 | 1880 | 0.2019 | - | - | - |
| 3.8493 | 1890 | 0.1932 | - | - | - |
| 3.8697 | 1900 | 0.2192 | - | - | - |
| 3.8900 | 1910 | 0.2275 | - | - | - |
| 3.9104 | 1920 | 0.1971 | - | - | - |
| 3.9308 | 1930 | 0.1662 | - | - | - |
| 3.9511 | 1940 | 0.1979 | - | - | - |
| 3.9715 | 1950 | 0.1531 | - | - | - |
| 3.9919 | 1960 | 0.223 | - | - | - |
| 4.0 | 1964 | - | 0.6803 | 0.6742 | 0.6616 |
| 4.0122 | 1970 | 0.1989 | - | - | - |
| 4.0326 | 1980 | 0.164 | - | - | - |
| 4.0530 | 1990 | 0.1588 | - | - | - |
| 4.0733 | 2000 | 0.1683 | - | - | - |
| 4.0937 | 2010 | 0.1672 | - | - | - |
| 4.1141 | 2020 | 0.1677 | - | - | - |
| 4.1344 | 2030 | 0.1603 | - | - | - |
| 4.1548 | 2040 | 0.1973 | - | - | - |
| 4.1752 | 2050 | 0.147 | - | - | - |
| 4.1955 | 2060 | 0.1718 | - | - | - |
| 4.2159 | 2070 | 0.1647 | - | - | - |
| 4.2363 | 2080 | 0.1818 | - | - | - |
| 4.2566 | 2090 | 0.1762 | - | - | - |
| 4.2770 | 2100 | 0.1645 | - | - | - |
| 4.2974 | 2110 | 0.1407 | - | - | - |
| 4.3177 | 2120 | 0.1444 | - | - | - |
| 4.3381 | 2130 | 0.1721 | - | - | - |
| 4.3585 | 2140 | 0.175 | - | - | - |
| 4.3788 | 2150 | 0.1755 | - | - | - |
| 4.3992 | 2160 | 0.1959 | - | - | - |
| 4.4196 | 2170 | 0.1908 | - | - | - |
| 4.4399 | 2180 | 0.1834 | - | - | - |
| 4.4603 | 2190 | 0.155 | - | - | - |
| 4.4807 | 2200 | 0.1551 | - | - | - |
| 4.5010 | 2210 | 0.1785 | - | - | - |
| 4.5214 | 2220 | 0.1305 | - | - | - |
| 4.5418 | 2230 | 0.1573 | - | - | - |
| 4.5621 | 2240 | 0.1659 | - | - | - |
| 4.5825 | 2250 | 0.1622 | - | - | - |
| 4.6029 | 2260 | 0.1529 | - | - | - |
| 4.6232 | 2270 | 0.1792 | - | - | - |
| 4.6436 | 2280 | 0.1764 | - | - | - |
| 4.6640 | 2290 | 0.1327 | - | - | - |
| 4.6843 | 2300 | 0.1463 | - | - | - |
| 4.7047 | 2310 | 0.1783 | - | - | - |
| 4.7251 | 2320 | 0.175 | - | - | - |
| 4.7454 | 2330 | 0.156 | - | - | - |
| 4.7658 | 2340 | 0.1682 | - | - | - |
| 4.7862 | 2350 | 0.1815 | - | - | - |
| 4.8065 | 2360 | 0.1829 | - | - | - |
| 4.8269 | 2370 | 0.1742 | - | - | - |
| 4.8473 | 2380 | 0.1425 | - | - | - |
| 4.8676 | 2390 | 0.1565 | - | - | - |
| 4.8880 | 2400 | 0.1941 | - | - | - |
| 4.9084 | 2410 | 0.1721 | - | - | - |
| 4.9287 | 2420 | 0.1767 | - | - | - |
| 4.9491 | 2430 | 0.166 | - | - | - |
| 4.9695 | 2440 | 0.1776 | - | - | - |
| 4.9898 | 2450 | 0.1628 | - | - | - |
| **5.0** | **2455** | **-** | **0.681** | **0.6755** | **0.6627** |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.2.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# BERT Amharic Text Embedding Small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [rasyosef/bert-small-amharic](https://huggingface.co/rasyosef/bert-small-amharic) on the json dataset. It maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [rasyosef/bert-small-amharic](https://huggingface.co/rasyosef/bert-small-amharic) <!-- at revision b4480f0a1501f98c781d52935a95c02f903640c9 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 512 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("yosefw/bert-amharic-embed-small-v5")
# Run inference
sentences = [
'ኤም ቲ ኤን ለአልቃይዳና ታሊባን ጉቦ በመስጠት ተወነጀለ',
'ኩባንያው ለእነዚህ ቡድኖች ገንዘብ የሰጠው አፍጋኒስታን ውስጥ የሚገኝና ትልቅ ኢንቨስትመንት ያደረገባቸው የኔትዎርክ ታዎሮች ላይ ጥቃት እንዳይደርስበት ለጥበቃ ነው ተብሏል።\n\nበውንጀላው መቀመጫቸውን አሜሪካ ያደረጉ ሌሎች አምስት ኩባንያዎችም ያሉ ሲሆን ክሱ የቀረበው አፍጋኒስታን ውስጥ በተገደሉ የአሜሪካ ዜጎች ስም ነው።\n\n• ጃዋር ኦሮሞ ፌደራሊስት ኮንግረስን መቀላቀሉ ተረጋገጠ\n\n• ሱዳን በ29 ግለሰቦች ላይ የሞት ፍርድ አስተላለፈች\n\n• "ሙስና ለመጣው ፖለቲካዊ ለውጥ አንድ ምክንያት ነው" \n\nበቀረበው ክስ እንደተባለው ሁለቱ አሸባሪ ቡድኖች ከኤም ቲ ኤን ያገኙትን ገንዘብ እንደ አውሮፓውያኑ ከ2009 እስከ 2017 አፍጋኒስታን ውስጥ ላካሄዷቸው የጥቃት ዘመቻዎች ተጠቅመውበታል።\n\nይህ ደግሞ የአሜሪካን የፀረ ሽብር አዋጅን የሚፃረር ነው፤ ስለዚህም ኤም ቲ ኤን ይህን ህግ ተላልፏል ተብሏል።\n\nኩባንያው ግን በየትኛውም ቦታ ስራውን የሚያካሂደው ሃላፊነት በተሞላበት መንገድ እንደሆነ ገልጿል።\n\nኤም ቲ ኤን በአፍሪካ ግዙፉ፤ በአለም አቀፍ ደረጃ ደግሞ ከስምንት ትልልቅ ኩባንያዎች አንዱ ነው። ከ240 ሚሊዮን ተጠቃሚዎች በላይም አለው ተብሎ ይታመናል።\n\nበ2015 ያልተመዘገቡ ሲም ካርዶችን ባለመሰረዝ በናይጄሪያ ባለስልጣናት በቀረበበት ክስ 5 ቢሊዮን ዶላር እንዲቀጣ ተፈርዶበት፤ ከረዥም ክርክር በኋላ እንዲሁም የያኔው የደቡብ አፍሪካ ፕሬዝዳንት ጃኮም ዙማ በጉዳዩ ጣልቃ ገብተው ቅጣቱ ወደ 1.7 ቢሊዮን ዶላር እንደወረደለት የሚታወስ ነው።\n\nየዛሬ ዓመትም በኢራን የቀድሞ የደቡብ አፍሪካ አምባሳደር ኤምቲኤን ኢራን ውስጥ እንዲሰራና የ 31.6 ቢሊዮን ዶላር ፕሮጀክት እንዲያሸንፍ ጉቦ ተቀብለዋል በሚል መታሰራቸውም ይታወሳል።\n\n ',
'ባለፉት 20 አመታት ዴሞክራሲን በማስረፅ ረገድ የህዝቦችን ተሳትፎ የቃኘ ጥናት ይፋ በሆነበት ወቅት እንደተገለፀው፤ በርካታ የሚዲያ ተቋማት የዴሞክራሲ እሴቶችን አጉልቶ በማውጣት ረገድ ሰፊ ውስንነቶች ታተውባቸዋል፡፡ባለፉት ዓመታት\xa0በርካታዎቹ \xa0የስነ-ምግባር መርሆዎችን ሳይጠብቁ \xa0የመዘገብ ዝንባሌ ነበራቸው ተብሏል፡፡በኢትዮጵያ ስራ አመራር ኢኒስቲትዩት በተካሄደውና ከተለያዩ የህብረተሰብ ክፍሎች የተወከሉ ዜጎች በተሳተፉበት አውደ ጥናት በዋናነት በዴሞክራሲ ስርፀት ዙሪያ የዜጎችን ምልከታ፣ አተገባበርና ተፅእኖን በመገምገም መፍትሄን ማመላከት እንደሆነ ተገልጿል፡፡ሙሉ በሙሉ በአዲስ አበባ ከተማ ላይ ትኩረቱን ባደረገው በዚህ ጥናት ፖለቲከኞች ሀገራዊ አጀንዳዎች ላይ በማተኮር ሀሳብ ከማመንጨት ይልቅ በግላዊ ጥቅሞቻቸው ላይ ማተኮራቸው ሀላፊነታቸውን በአግባቡ እንዳይወጡ ማድረጉ ተነስቷል፡፡ዜጎችም ቢሆኑ \xa0ነገሮችን በሰከነ ሁኔታ ከማጤን ይልቅ በስሜታዊነት ወደ አላስፈላጊ ግጭቶች የማምራት ሁኔታ ሲስተዋልባቸው እንደነበር ያመላከተው\xa0ጥናቱ፤ ይህም ዴሞክራሲ ስር እንዳይሰድ የራሱን ተፅዕኖ ማሳደሩን ተነስቷል፡፡በመንግስት በኩል የታዩ ክፍተቶችንም ጥናቱ ያመላከተ ሲሆን፤ በተለይም ለአሰራር ምቹ ያልሆኑ አደረጃጀቶችን ያለመተቸት ችግር፣ በፓርቲዎች የግል አቋም ጣልቃ መግባትን ጨምሮ የዴሞክራሲ ተቋማትን ተናበውና ተቀናጅተው እንዲሰሩ ያለማድረግ ችግሮች ተስተውለዋል፡፡በመሆኑም ሁሉም ባለድርሻ አካላት ለዴሞክራሲ ስርፀጥ ሚናውን በአግባቡ እንዲወጣ በመድረኩ ጥሪ ቀርቧል፡፡',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 512]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_512`, `dim_256` and `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_512 | dim_256 | dim_128 |
|:--------------------|:----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.5561 | 0.5513 | 0.5383 |
| cosine_accuracy@3 | 0.6976 | 0.6925 | 0.6786 |
| cosine_accuracy@5 | 0.7536 | 0.7393 | 0.7324 |
| cosine_accuracy@10 | 0.8098 | 0.808 | 0.7944 |
| cosine_precision@1 | 0.5561 | 0.5513 | 0.5383 |
| cosine_precision@3 | 0.2325 | 0.2308 | 0.2262 |
| cosine_precision@5 | 0.1507 | 0.1479 | 0.1465 |
| cosine_precision@10 | 0.081 | 0.0808 | 0.0794 |
| cosine_recall@1 | 0.5561 | 0.5513 | 0.5383 |
| cosine_recall@3 | 0.6976 | 0.6925 | 0.6786 |
| cosine_recall@5 | 0.7536 | 0.7393 | 0.7324 |
| cosine_recall@10 | 0.8098 | 0.808 | 0.7944 |
| **cosine_ndcg@10** | **0.681** | **0.6755** | **0.6627** |
| cosine_mrr@10 | 0.6399 | 0.6336 | 0.6209 |
| cosine_map@100 | 0.6457 | 0.6392 | 0.6269 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 62,833 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 16.23 tokens</li><li>max: 91 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 315.61 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>የ8ኛ እና የ12ኛ ክፍል ተማሪዎች የማካካሻ ትምህርት መማር ጀመሩ።</code> | <code>ባሕር ዳር፡ ጥቅምት 10/2013 ዓ.ም (አብመድ) በባሕር ዳር ከተማ አስተዳደር ለሚገኙ የ12ኛ እና የ8ኛ ክፍል ተማሪዎች የማካካሻ ትምህርት መስጠት መጀመሩን መምሪያው አስታውቋል፡፡በባሕር ዳር ከተማ አስተዳደር ለሚገኙ ለ12ኛ እና ለ8ኛ ክፍል ተማሪዎች ሀገራዊና ሀገር አቀፍ ዜና ፈተና ከመወስዳቸው በፊት ለ45 ቀናት የሚቆይ የማካካሻ ትምህርት ከጥቅምት 09/2013 ዓ.ም ጀምሮ መስጠት መጀመሩን የከተማ አስተዳደሩ ትምህርት መምሪያ ምክትል ኃላፊ መላክ ጀመረ ተናግረዋል፡፡“ዛሬ ተቀብለን ማስተማር የጀመርነው የኮሮናቫይረስን ለመከላከል የአፍና የአፍንጫ መሸፈኛ ጭምብል የተጠቀሙ ተማሪዎችን ብቻ ነው፡፡ ከትምህርት ሚኒስቴር የተላከው ጭምብል እስከዛሬ ባይደርሰንም ወላጆች ለልጆቻቸው በገዙት ተጠቅመን ነው ማስተማር የጀመርነው” ብለዋል አቶ መላክ። መማርም ሆነ ማስተማር የሚቻለው ጤና ሲኖር ብቻ ስለሆነ ተማሪዎች ያለማንም ክትትል ጭምብል እንዲጠቀሙም ጥሪ አቅርበዋል፡፡በሚቀጥለው ሳምንት ከ1ኛ ክፍል በስተቀር ሁሉም ትምህርት ቤቶች ለሦስት ሳምንታት የማካካሻ ትምህርት እንደሚወስዱ የተናገሩት ምክትል መምሪያ ኃላፊው ከማካካሻው ትምህርት በኋላ የ2013 ትምህርት ዘመን ሙሉ በሙሉ የመማር ማስተማር ሂደቱ እንደሚቀጥል አስገንዝበዋል፡፡ወረርሽኙን ለመከላከል ሲባል ትምህርት ሚኒስቴር ባስቀመጠው አቅጣጫ መሠረት የመንግሥት ትምህርት ቤቶች ከአንድ እስከ ሦስት ፈረቃ እንዲሁም የግል ትምህርት ቤቶች ደግሞ በአንድ እና ሁለት ፈረቃ ብቻ ማስተማር እንደሚቀጥሉ አቶ መላክ ጠቁመዋል፡፡</code> |
| <code>በክልሎች በተፈጠሩ ግጭቶች ላይ ተሳትፈዋል በተባሉ 1 ሺህ 323 ተጠርጣሪዎች ላይ ክስ ተመሰረተ</code> | <code>በተለያዩ ክልሎች በተፈጠሩ ግጭቶች ላይ ተሳትፈዋል በተባሉ 1 ሺህ 323 ተጠርጣሪዎች ላይ ክስ ተመሰረተ።በ2011 በጀት ዓመት በተለያዩ ክልሎች በተፈጠሩ ግጭቶች ተሳትፈዋል በተባሉ 1 ሺህ 323 ተጠርጣሪዎች ላይ ክስ መመስረቱን የኢፌዲሪ ጠቅላይ ዐቃቤ ሕግ አስታወቀ፡፡በተፈጠረው ሁከትና ብጥብጥ 1 ሺህ 229 ሰዎች ህይዎት ያለፈ ሲሆን በ1 ሺህ 393 ሰዎች ላይ ደግሞ ቀላልና ከባድ የአካል ጉዳት ሲደርስ በ19 ሰዎች ላይ የግድያ ሙከራ መደረጉን በጠቅላይ ዐቃቤ ሕግ የተደራጁ ድንበር ተሸጋሪ ወንጀሎች ዳይሬክተር የሆኑት አቶ ፍቃዱ ፀጋ ገልፀዋል፡፡በግጭቶቹ ከ2.2 ቢሊዮን ብር በላይ የሚገመት የዜጎች ንብረት የወደመ ሲሆን፤ 1.2 ሚሊዮን ዜጎች ከመኖሪያ ቤታቸውና ከአካባቢያቸው ተፈናቅለዋል፡፡ከተከሳሾቹ መካከል 645 ተጠርጣሪዎች በቁጥጥር ስር ውለው ጉዳያቸው እየታየ ሲሆን 667 የሚሆኑ ተጠርጣሪዎች ደግሞ በቁጥጥር ስር አልዋሉም፡፡የ10 ተጠርጣሪዎች ክስም በምህረት መነሳቱን ዳይሬክተሩ አክለዋል፡፡በመጨረሻም አቶ ፍቃዱ ተጠርጣሪዎችን በቁጥጥር ስር ለማዋል በሚደረግ እንቅስቃሴ ዙሪያ የሚስተዋለው ክፍተት አስመልክቶ መፍትሔ ያሉትን ሀሳብ ሲጠቁሙ ይህንን ችግር ለመቅረፍ ህብረተሰቡና የሚመለከታቸው ባለድርሻ አካላት ከፍትህ አካላት ጎን በመቆምና ተጠርጣሪዎችን አሳልፎ በመስጠት በኩል በጉዳዩ ላይ በባለቤትነት ስሜት ሊሰሩ እንደሚገባ አጽእኖት ሰጥተው መልዕክታቸውን አስተላልፈዋል፡፡በሌላ በኩል በአማራ ክልል በጃዊ ወረዳና በመተክል ዞን፤ በጎንደርና አካባቢው በተፈጠረ ሁከትና ብጥብጥ፤ በሰሜን ሸዋ አስተዳደር እንዲሁም በቤንሻጉል ጉምዝ ክልል ከማሻ ዞን ውስጥ በሚገኙ የተለያዩ ወረዳዎችና ቀበሌዎችና የዚሁ ዞን አጎራባች በሆነው በኦሮሚያ ክልል ምስራቅና ምዕራብ ወለጋ ዞን በተለያዩ ቀ...</code> |
| <code>ከሽመና ሥራ ---- እስከ ሚሊየነርነት! </code> | <code>“ይቅርታ መጠየቅ ጀግንነት እንጂ ሽንፈት አይደለም”የኮንሶው ተወላጅ አቶ ዱላ ኩሴ፤ቤሳቤስቲን አልነበራቸውም፡፡ ለብዙ ዓመታት በሽመና ስራ ላይ ቆይተዋል፡፡ በብዙ ልፋትና ትጋት፣ወጥተው ወርደው፣ ነው ለስኬት የበቁት፡፡ ዛሬበሚሊዮን ብሮች የሚንቀሳቀሱ የንግድ ድርጅቶች ባለቤት ሆነዋል፡፡ ባለጠጋ ናቸው፡፡ የ50 ዓመቱ ጎልማሳ አቶ ዱላ፤በልጆችም ተንበሽብሸዋል፡፡ የ17 ልጆች አባት ናቸው፡፡ በቅርቡበሚዲያ የሰጡት አንድ አስተያየት የአገሬውን ህዝብ ማስቆጣቱን የሚናገሩት ባለሃብቱ፤አሁን በሽማግሌ እርቅ ለመፍጠር እየተሞከረ መሆኑን ጠቁመዋል፡፡ ባለሃብቱ ከህዝቡ ጋር ቅራኔውስጥ የከተታቸው ጉዳይ ምን ይሆን? የአዲስ አድማስ ጋዜጠኛ ማህሌት ኪዳነወልድ፤ ከአቶ ዱላ ኩሴ ጋር ይሄን ጨምሮ በስኬት ጉዟቸውና በንግድ ሥራቸው ዙሪያ አነጋግራቸዋለች፡፡መቼ ነው የሽመና ሥራ የጀመሩት?በ13 ወይም በ14 ዓመቴ ይመስለኛል፡፡ ለቤተሰቤ አራተኛ ልጅ ነኝ፡፡ ለ10 ዓመታት ያህል በሽመና ስራ ላይ ቆይቻለሁ፡፡ ስራዎቼንም የምሸጠው እዛው በአካባቢው ላሉ ሰዎች ነበር፡፡ ቀጣዩ ሥራዎስ ምን ነበር?ወደ ጅንካ በመሄድ ለ4 ዓመታት ያህል ኦሞ ዞን ጂንካ ከተማ ላይ ሽያጩን ቀጠልኩ፡፡ በኋላም ወደ ወላይታ ተመልሼ፣ ማግና ሰዴቦራ /ብርድ ቦታ የሚለበስ የጋቢ አይነት/ መሸጥ ጀመርኩ፡፡ ለ3 ዓመታትም ቦዲቲ እየወሰድኩ ሸጫለሁ፡፡ እንግዲህ አቅም እየጠነከረ፣ ገንዘብ እየተሰበሰበ ሲመጣ፣ አነስተኛ ሸቀጣ ሸቀጥ ሱቅ ከፈትኩኝ፡፡ የቤት እቃና ልብስ መሸጥ ጀመርኩኝ፡፡ ብዙም ሳልቆይ ወደ ከተማ ወርጄ፣ ወደ ሆቴል ስራ ገባሁ፡፡ ተቀጥረው ነው ወይስ የራስዎን ሆቴል?የራሴን ነው፡፡ ኮንሶ እድገት ሆቴል ይባላል፡፡ በ91 ዓመተ ምህረት ነበር ሆቴሉን አነስ አድርጌ የከፈትኩት፡፡ በኋላም የሸቀጣሸቀጥ ገበያው እየተቀዛቀዘ በ...</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
512,
256,
128
],
"matryoshka_weights": [
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `num_train_epochs`: 5
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 |
|:-------:|:--------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|
| 0.0204 | 10 | 8.5099 | - | - | - |
| 0.0407 | 20 | 7.5508 | - | - | - |
| 0.0611 | 30 | 6.0747 | - | - | - |
| 0.0815 | 40 | 4.8277 | - | - | - |
| 0.1018 | 50 | 4.1868 | - | - | - |
| 0.1222 | 60 | 3.0626 | - | - | - |
| 0.1426 | 70 | 2.9647 | - | - | - |
| 0.1629 | 80 | 2.3808 | - | - | - |
| 0.1833 | 90 | 2.1637 | - | - | - |
| 0.2037 | 100 | 2.0235 | - | - | - |
| 0.2240 | 110 | 2.0617 | - | - | - |
| 0.2444 | 120 | 1.7025 | - | - | - |
| 0.2648 | 130 | 1.7932 | - | - | - |
| 0.2851 | 140 | 1.706 | - | - | - |
| 0.3055 | 150 | 1.4933 | - | - | - |
| 0.3259 | 160 | 1.5664 | - | - | - |
| 0.3462 | 170 | 1.4154 | - | - | - |
| 0.3666 | 180 | 1.5204 | - | - | - |
| 0.3870 | 190 | 1.3972 | - | - | - |
| 0.4073 | 200 | 1.3655 | - | - | - |
| 0.4277 | 210 | 1.4794 | - | - | - |
| 0.4481 | 220 | 1.3696 | - | - | - |
| 0.4684 | 230 | 1.2406 | - | - | - |
| 0.4888 | 240 | 1.2307 | - | - | - |
| 0.5092 | 250 | 1.2954 | - | - | - |
| 0.5295 | 260 | 1.3026 | - | - | - |
| 0.5499 | 270 | 1.2405 | - | - | - |
| 0.5703 | 280 | 1.237 | - | - | - |
| 0.5906 | 290 | 1.2187 | - | - | - |
| 0.6110 | 300 | 1.0976 | - | - | - |
| 0.6314 | 310 | 1.1978 | - | - | - |
| 0.6517 | 320 | 1.1269 | - | - | - |
| 0.6721 | 330 | 1.0751 | - | - | - |
| 0.6925 | 340 | 0.9505 | - | - | - |
| 0.7128 | 350 | 1.1134 | - | - | - |
| 0.7332 | 360 | 1.0918 | - | - | - |
| 0.7536 | 370 | 0.9427 | - | - | - |
| 0.7739 | 380 | 1.0422 | - | - | - |
| 0.7943 | 390 | 1.0331 | - | - | - |
| 0.8147 | 400 | 0.9231 | - | - | - |
| 0.8350 | 410 | 0.9498 | - | - | - |
| 0.8554 | 420 | 0.9848 | - | - | - |
| 0.8758 | 430 | 0.9296 | - | - | - |
| 0.8961 | 440 | 1.0072 | - | - | - |
| 0.9165 | 450 | 0.8376 | - | - | - |
| 0.9369 | 460 | 0.935 | - | - | - |
| 0.9572 | 470 | 0.7741 | - | - | - |
| 0.9776 | 480 | 0.8884 | - | - | - |
| 0.9980 | 490 | 0.8525 | - | - | - |
| 1.0 | 491 | - | 0.5987 | 0.5911 | 0.5708 |
| 1.0183 | 500 | 0.732 | - | - | - |
| 1.0387 | 510 | 0.6374 | - | - | - |
| 1.0591 | 520 | 0.642 | - | - | - |
| 1.0794 | 530 | 0.651 | - | - | - |
| 1.0998 | 540 | 0.5616 | - | - | - |
| 1.1202 | 550 | 0.6751 | - | - | - |
| 1.1405 | 560 | 0.5725 | - | - | - |
| 1.1609 | 570 | 0.6779 | - | - | - |
| 1.1813 | 580 | 0.6126 | - | - | - |
| 1.2016 | 590 | 0.6333 | - | - | - |
| 1.2220 | 600 | 0.6144 | - | - | - |
| 1.2424 | 610 | 0.5795 | - | - | - |
| 1.2627 | 620 | 0.6421 | - | - | - |
| 1.2831 | 630 | 0.6581 | - | - | - |
| 1.3035 | 640 | 0.5602 | - | - | - |
| 1.3238 | 650 | 0.5432 | - | - | - |
| 1.3442 | 660 | 0.5863 | - | - | - |
| 1.3646 | 670 | 0.6106 | - | - | - |
| 1.3849 | 680 | 0.5948 | - | - | - |
| 1.4053 | 690 | 0.6029 | - | - | - |
| 1.4257 | 700 | 0.5935 | - | - | - |
| 1.4460 | 710 | 0.4886 | - | - | - |
| 1.4664 | 720 | 0.5825 | - | - | - |
| 1.4868 | 730 | 0.5732 | - | - | - |
| 1.5071 | 740 | 0.4534 | - | - | - |
| 1.5275 | 750 | 0.5513 | - | - | - |
| 1.5479 | 760 | 0.5679 | - | - | - |
| 1.5682 | 770 | 0.6028 | - | - | - |
| 1.5886 | 780 | 0.5002 | - | - | - |
| 1.6090 | 790 | 0.5362 | - | - | - |
| 1.6293 | 800 | 0.5899 | - | - | - |
| 1.6497 | 810 | 0.5807 | - | - | - |
| 1.6701 | 820 | 0.4952 | - | - | - |
| 1.6904 | 830 | 0.5656 | - | - | - |
| 1.7108 | 840 | 0.5707 | - | - | - |
| 1.7312 | 850 | 0.5203 | - | - | - |
| 1.7515 | 860 | 0.5939 | - | - | - |
| 1.7719 | 870 | 0.447 | - | - | - |
| 1.7923 | 880 | 0.493 | - | - | - |
| 1.8126 | 890 | 0.4972 | - | - | - |
| 1.8330 | 900 | 0.5028 | - | - | - |
| 1.8534 | 910 | 0.6546 | - | - | - |
| 1.8737 | 920 | 0.5032 | - | - | - |
| 1.8941 | 930 | 0.5613 | - | - | - |
| 1.9145 | 940 | 0.4963 | - | - | - |
| 1.9348 | 950 | 0.512 | - | - | - |
| 1.9552 | 960 | 0.5305 | - | - | - |
| 1.9756 | 970 | 0.5705 | - | - | - |
| 1.9959 | 980 | 0.4803 | - | - | - |
| 2.0 | 982 | - | 0.6536 | 0.6459 | 0.6266 |
| 2.0163 | 990 | 0.3358 | - | - | - |
| 2.0367 | 1000 | 0.2909 | - | - | - |
| 2.0570 | 1010 | 0.3514 | - | - | - |
| 2.0774 | 1020 | 0.3245 | - | - | - |
| 2.0978 | 1030 | 0.2989 | - | - | - |
| 2.1181 | 1040 | 0.3315 | - | - | - |
| 2.1385 | 1050 | 0.3156 | - | - | - |
| 2.1589 | 1060 | 0.3156 | - | - | - |
| 2.1792 | 1070 | 0.2989 | - | - | - |
| 2.1996 | 1080 | 0.2878 | - | - | - |
| 2.2200 | 1090 | 0.327 | - | - | - |
| 2.2403 | 1100 | 0.3278 | - | - | - |
| 2.2607 | 1110 | 0.3013 | - | - | - |
| 2.2811 | 1120 | 0.3342 | - | - | - |
| 2.3014 | 1130 | 0.2753 | - | - | - |
| 2.3218 | 1140 | 0.3339 | - | - | - |
| 2.3422 | 1150 | 0.3394 | - | - | - |
| 2.3625 | 1160 | 0.3122 | - | - | - |
| 2.3829 | 1170 | 0.3322 | - | - | - |
| 2.4033 | 1180 | 0.3354 | - | - | - |
| 2.4236 | 1190 | 0.2837 | - | - | - |
| 2.4440 | 1200 | 0.3303 | - | - | - |
| 2.4644 | 1210 | 0.3136 | - | - | - |
| 2.4847 | 1220 | 0.2824 | - | - | - |
| 2.5051 | 1230 | 0.3052 | - | - | - |
| 2.5255 | 1240 | 0.3075 | - | - | - |
| 2.5458 | 1250 | 0.3123 | - | - | - |
| 2.5662 | 1260 | 0.3347 | - | - | - |
| 2.5866 | 1270 | 0.2738 | - | - | - |
| 2.6069 | 1280 | 0.3219 | - | - | - |
| 2.6273 | 1290 | 0.3354 | - | - | - |
| 2.6477 | 1300 | 0.3056 | - | - | - |
| 2.6680 | 1310 | 0.3494 | - | - | - |
| 2.6884 | 1320 | 0.3286 | - | - | - |
| 2.7088 | 1330 | 0.2915 | - | - | - |
| 2.7291 | 1340 | 0.3158 | - | - | - |
| 2.7495 | 1350 | 0.3012 | - | - | - |
| 2.7699 | 1360 | 0.2816 | - | - | - |
| 2.7902 | 1370 | 0.2399 | - | - | - |
| 2.8106 | 1380 | 0.2449 | - | - | - |
| 2.8310 | 1390 | 0.3156 | - | - | - |
| 2.8513 | 1400 | 0.2994 | - | - | - |
| 2.8717 | 1410 | 0.335 | - | - | - |
| 2.8921 | 1420 | 0.33 | - | - | - |
| 2.9124 | 1430 | 0.2825 | - | - | - |
| 2.9328 | 1440 | 0.2933 | - | - | - |
| 2.9532 | 1450 | 0.2607 | - | - | - |
| 2.9735 | 1460 | 0.3466 | - | - | - |
| 2.9939 | 1470 | 0.3064 | - | - | - |
| 3.0 | 1473 | - | 0.6750 | 0.6674 | 0.6555 |
| 3.0143 | 1480 | 0.2303 | - | - | - |
| 3.0346 | 1490 | 0.207 | - | - | - |
| 3.0550 | 1500 | 0.2116 | - | - | - |
| 3.0754 | 1510 | 0.2179 | - | - | - |
| 3.0957 | 1520 | 0.204 | - | - | - |
| 3.1161 | 1530 | 0.1995 | - | - | - |
| 3.1365 | 1540 | 0.2077 | - | - | - |
| 3.1568 | 1550 | 0.2089 | - | - | - |
| 3.1772 | 1560 | 0.2135 | - | - | - |
| 3.1976 | 1570 | 0.2098 | - | - | - |
| 3.2179 | 1580 | 0.181 | - | - | - |
| 3.2383 | 1590 | 0.2247 | - | - | - |
| 3.2587 | 1600 | 0.2053 | - | - | - |
| 3.2790 | 1610 | 0.2124 | - | - | - |
| 3.2994 | 1620 | 0.231 | - | - | - |
| 3.3198 | 1630 | 0.189 | - | - | - |
| 3.3401 | 1640 | 0.2111 | - | - | - |
| 3.3605 | 1650 | 0.2238 | - | - | - |
| 3.3809 | 1660 | 0.1892 | - | - | - |
| 3.4012 | 1670 | 0.2265 | - | - | - |
| 3.4216 | 1680 | 0.2129 | - | - | - |
| 3.4420 | 1690 | 0.1952 | - | - | - |
| 3.4623 | 1700 | 0.1961 | - | - | - |
| 3.4827 | 1710 | 0.1774 | - | - | - |
| 3.5031 | 1720 | 0.205 | - | - | - |
| 3.5234 | 1730 | 0.2328 | - | - | - |
| 3.5438 | 1740 | 0.2092 | - | - | - |
| 3.5642 | 1750 | 0.2347 | - | - | - |
| 3.5845 | 1760 | 0.1783 | - | - | - |
| 3.6049 | 1770 | 0.2023 | - | - | - |
| 3.6253 | 1780 | 0.1829 | - | - | - |
| 3.6456 | 1790 | 0.2084 | - | - | - |
| 3.6660 | 1800 | 0.2091 | - | - | - |
| 3.6864 | 1810 | 0.2172 | - | - | - |
| 3.7067 | 1820 | 0.1661 | - | - | - |
| 3.7271 | 1830 | 0.1854 | - | - | - |
| 3.7475 | 1840 | 0.1963 | - | - | - |
| 3.7678 | 1850 | 0.2228 | - | - | - |
| 3.7882 | 1860 | 0.216 | - | - | - |
| 3.8086 | 1870 | 0.1956 | - | - | - |
| 3.8289 | 1880 | 0.2019 | - | - | - |
| 3.8493 | 1890 | 0.1932 | - | - | - |
| 3.8697 | 1900 | 0.2192 | - | - | - |
| 3.8900 | 1910 | 0.2275 | - | - | - |
| 3.9104 | 1920 | 0.1971 | - | - | - |
| 3.9308 | 1930 | 0.1662 | - | - | - |
| 3.9511 | 1940 | 0.1979 | - | - | - |
| 3.9715 | 1950 | 0.1531 | - | - | - |
| 3.9919 | 1960 | 0.223 | - | - | - |
| 4.0 | 1964 | - | 0.6803 | 0.6742 | 0.6616 |
| 4.0122 | 1970 | 0.1989 | - | - | - |
| 4.0326 | 1980 | 0.164 | - | - | - |
| 4.0530 | 1990 | 0.1588 | - | - | - |
| 4.0733 | 2000 | 0.1683 | - | - | - |
| 4.0937 | 2010 | 0.1672 | - | - | - |
| 4.1141 | 2020 | 0.1677 | - | - | - |
| 4.1344 | 2030 | 0.1603 | - | - | - |
| 4.1548 | 2040 | 0.1973 | - | - | - |
| 4.1752 | 2050 | 0.147 | - | - | - |
| 4.1955 | 2060 | 0.1718 | - | - | - |
| 4.2159 | 2070 | 0.1647 | - | - | - |
| 4.2363 | 2080 | 0.1818 | - | - | - |
| 4.2566 | 2090 | 0.1762 | - | - | - |
| 4.2770 | 2100 | 0.1645 | - | - | - |
| 4.2974 | 2110 | 0.1407 | - | - | - |
| 4.3177 | 2120 | 0.1444 | - | - | - |
| 4.3381 | 2130 | 0.1721 | - | - | - |
| 4.3585 | 2140 | 0.175 | - | - | - |
| 4.3788 | 2150 | 0.1755 | - | - | - |
| 4.3992 | 2160 | 0.1959 | - | - | - |
| 4.4196 | 2170 | 0.1908 | - | - | - |
| 4.4399 | 2180 | 0.1834 | - | - | - |
| 4.4603 | 2190 | 0.155 | - | - | - |
| 4.4807 | 2200 | 0.1551 | - | - | - |
| 4.5010 | 2210 | 0.1785 | - | - | - |
| 4.5214 | 2220 | 0.1305 | - | - | - |
| 4.5418 | 2230 | 0.1573 | - | - | - |
| 4.5621 | 2240 | 0.1659 | - | - | - |
| 4.5825 | 2250 | 0.1622 | - | - | - |
| 4.6029 | 2260 | 0.1529 | - | - | - |
| 4.6232 | 2270 | 0.1792 | - | - | - |
| 4.6436 | 2280 | 0.1764 | - | - | - |
| 4.6640 | 2290 | 0.1327 | - | - | - |
| 4.6843 | 2300 | 0.1463 | - | - | - |
| 4.7047 | 2310 | 0.1783 | - | - | - |
| 4.7251 | 2320 | 0.175 | - | - | - |
| 4.7454 | 2330 | 0.156 | - | - | - |
| 4.7658 | 2340 | 0.1682 | - | - | - |
| 4.7862 | 2350 | 0.1815 | - | - | - |
| 4.8065 | 2360 | 0.1829 | - | - | - |
| 4.8269 | 2370 | 0.1742 | - | - | - |
| 4.8473 | 2380 | 0.1425 | - | - | - |
| 4.8676 | 2390 | 0.1565 | - | - | - |
| 4.8880 | 2400 | 0.1941 | - | - | - |
| 4.9084 | 2410 | 0.1721 | - | - | - |
| 4.9287 | 2420 | 0.1767 | - | - | - |
| 4.9491 | 2430 | 0.166 | - | - | - |
| 4.9695 | 2440 | 0.1776 | - | - | - |
| 4.9898 | 2450 | 0.1628 | - | - | - |
| **5.0** | **2455** | **-** | **0.681** | **0.6755** | **0.6627** |
* The bold row denotes the saved checkpoint.
</details>
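For reference, the checkpoint logged above is a Sentence Transformers embedding model trained with MatryoshkaLoss (see the citations below), so its embeddings can be truncated to the 512/256/128 dimensions evaluated in the table. The sketch below is an assumption rather than part of the original card; the repository id is a placeholder and the example sentences should be replaced with Amharic text.

```python
from sentence_transformers import SentenceTransformer

# Placeholder id: substitute the actual repository of this checkpoint.
model = SentenceTransformer("path-or-repo-id-of-this-checkpoint", truncate_dim=256)

# Replace with Amharic queries/passages; the strings here are only placeholders.
embeddings = model.encode(["example query", "example passage"])
print(model.similarity(embeddings[0:1], embeddings[1:2]))
```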
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.2.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"base_model": "rasyosef/bert-small-amharic", "language": ["en"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:62833", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "በናይጀሪያ 11ሚሊየን ህዝብ የከፋ የምግብ እጥረት እንደሚያጋጥማቸው ድርጅቱ አስጠነቀቀ", "sentences": ["በናይጀሪያ 11 ሚሊየን ህዝብ እጅግ የከፋ የምግብ እጥረት እንደሚያጋጥማቸዉ የዓለም የምግብ እና የእርሻ ድርጅት አስጠነቀቀ ፡፡ድርጅቱ የምርት ወቅት በሆነዉ ከሰኔ እስከ ነሐሴ ባሉት ጊዜያት በሰሜናዊ ናይጀሪያ የሚገኙ አካባቢዎች ለዚህ ችግር እንደሚጋለጡ ይጠበቃል ነው ያለው ።በዚህ ክፉኛ ሊጠቁ እንደሚችሉ ከሚገመቱት ቦታዎችም ቦኮ ሃራም የተመሠረተባት ቦርኖ 65 በመቶ የሚሆነዉ ርሃብ የሚያሰጋዉ ዜጋ የሚገኝባት ግዛት ናት ።ቦኮ ሃራም በፈጠረዉ ያለመረጋጋት ምክንያት 120 ሺ የሀገሪቱ ዜጎች አደገኛ ለሆነ የረሃብ አደጋ እንዲጋለጡ ማድረጉን ጨምሮ አመልክቷል ።የተባበሩት መንግስታት ድርጅቶች በበኩሉ እንደገለጸው ፤በዚህ ስፍራ ልጆች እየሞቱ ስለሆነ አፋጣኝ እርዳታ ካላገኙም በሚሊየን የሚገመቱት ይኸዉ ዕጣ ያሰጋቸዋል።ሙስና እና በመንግሥት እና በእርዳታ ድርጅቶች መካከል ያለዉ ዉዝግብም ችግሩን እያወሳሰበዉ መሆኑም ተገልጿል።ባለስልጣናት የአካባቢዉ መንግሥታት የእርዳታ እህሉን ይሰርቃሉ የሚለዉን ክስ እያጣሩ መሆኑን አሶሼየትድ ፕረስ ዘግቧል።የናይጀሪያ መንግሥት ለእርሻ የሚያደርገዉን ድጋፍ ከፍ ማደርጉን ቢገልጽም፤ ሀገሪቷ የምግብ እጥረት ያለባት መሆኗን ነው የተመለከተው ።", "የተባበሩት መንግሥታት ድርጅት ዋና ጸሐፊ አንቶንዮ ጉተሬዥ ትላንት በኒው ዮርክ ከኢትዮጵያ ፌዴራላዊ ዲሞክራሲያዊ ሪፖብሊክ ውጭ ጉዳይ ሚኒስትር ዶ/ር ወርቅነህ ገበየሁ ጋር ተገናኝተው መነጋገራቸውን የደርጅቱ ቃል አቀባይ አስታወቁ።ዋና ጸሐፊ ጉተሬዥ እና ውጭ ጉዳይ ሚኒስትር ወርቅነህ ገበየሁ በዚሁ በውይይታቸው በአሁኑ ወቅት በአፍሪካ ቀንድ እየተካሄዱ ስላሉ አዎንታዊ ለውጦችና በአካባቢው የቀሩትን ፈታኝ ችግሮች ማስወገድ በሚቻሉባቸው መንገዶች ላይ ሃሳብ ተለዋውጠዋል። በቅርቡ በኢትዮጵያና ኤርትራ መካከል የሰላም ሥምምነት መፈረሙን በደስታ እንደተቀበሉት ገልፀው ለአካባቢው ሰላምና መረጋጋት ጠቃሚ ነው ብለዋል።የተባበሩት መንግሥታት ድርጅት ዋና ጸሐፊ አንቶኒዮ ጉተሬዥ የዓለሙ ድርጅት ለእነዚህ በጎ ጥረቶችና ኢትዮጵያ በጂቡቲና ኤርትራ መካከል የሰላም ንግግር እንዲጀመር በማመቻቸቷ ሙሉ ድጋፍ እንደሚስጥ በድጋሚ አረጋግጠዋል።", "የኢትዮጵያ ከፍተኛ ሊግ ምድብ ለ የሁለተኛ የውድድር ዘመን አጋማሽ ጨዋታዎች ቅዳሜ ተደርገው ሀላባ ከተማ መሪነቱን ሲያጠናክር ደቡብ ፖሊስ እና ጅማ አባ ቡና ደረጃቸውን ያሻሻሉበትን ድል አስመዝግበዋል።ጅማ አባ ቡና 3-0 ቤንች ማጂ ቡና(በቴዎድሮስ ታደሰ)ጅማ ስታድየም ላይ ቤንች ማጂ ቡናን ያስተናገደው ጅማ አባቡናን 3-0 በማሸነፍ ደረጃውን አሻሽሏል። ጨዋታው ሳቢ ያልነበረ ሲሆን ቤንች ማጂዎች ሙሉ ለሙሉ መከላከል እና ጉልበት ላይ ያመዘነ ጨዋታን ተከትለው ለመጫወት ተገደዋል። አባቡናዎች ምንም እንኳን የመጀመርያውን አጋማሽ በተመስገን ደረሰ 34ኛው ደቂቃ ባስቆጠራት ጎል በመምራት ቢያጠናቅቁም በእንቅስቃሴ ረገድ ባልተሳኩ ቅብብሎች እና ያልተደራጀ የጨዋታ እንቅስቃሴ እንዲሁም የቤንች ማጂን የተከላካይ መስመር ማለፍ ሲቸገሩ ተመልክተናል።\nከእረፍት መልስ በይበልጥ በሽኩቻዎች ታጅቦ ቤንች ማጂ ተጫዋቾች እያንዳንዱን የዳኛ ውሳኔ ሲቃወሙ እና አላስፈላጊ ንትርክ ውስጥ ሲገቡ ተስተውለዋል። በ50ኛው ደቂቃ ብዙዓየሁ እንደሻው አባቡናን መሪነት ወደ 2-0 መሪነት ከፍ ያደረገች ግብ ካስቆጠረ በኋላ ከመጀመርያው አጋማሽ አንፃር አባቡናዎች ወደግብ ቶሎ ቶሎ በመድረስና የኳስ ቁጥጥር ከመጀመርያው አጋማሽ በተሻለ ሁኔታ መንቀሳቀስ ችለዋል። በ68ኛው ደቂቃ ብዙአየሁ እንደሻው ፍፁም ቅጣት ምት ክልል ውስጥ መጠለፉን ተከትሎ የመሀል ዳኛው የሰጡትን ፍፁም ቅጣት ምት በመቃወም ከዳኛው ጋር ግብግብ የገጠሙት የቤንች ማጂ ተጫዋቾች ጌታሁን ገላዬ እና አበራ አየለ ከሜዳ በቀይ ካርድ እንዲወጡ ተደርጓል። በሁኔታው ጨዋታው ለ10 ደቂቃዎች ተቋርጦ ከቀጠለ በኋላ የተሰጠውን ፍፁም ቅጣት ምት ብዙአየሁ አስቆጥሮ አባቡናን 3-0 በሆነ ውጤት ማሸነፍ ያስቻለውን ውጤት አስመዝግቧል።ከጨዋታው መጠናቀቅ በኋላ ቤንች ማጂ የቡድን አባላት ሜዳ በመግባት የእለቱ አልቢትሮች ላይ ጥቃት ለመፈፀም ቢሞክሩም በእለቱ በነበሩበት የፀጥታ አካላት ርብርብ አርቢትሮች ላይ ጉዳት ሳይደርስ የቀረ ሲሆን የግብ ጠባቂው አሰልጣኝ ታፈሰ አጃ ቀይ ካርድ ተመልክቷል። ከሁሉም በላይ አስገራሚው ግብ ጠባቂው አብዱልሃፊዝ መኪ ከዳኛው አልፍ ተርፎ ፀጥታ በማስከበርና ለዳኞች ከለላ ለሰጠው የኮማንድ ፖስት አባል የሆነው የፌዴራል ፖሊስ ላይ ጥቃት በመሰንዘሩ ነገሮች አቅጣጫዎችን በመቀየር ጨዋታው ከተጠናቀቀ በኃላ ለረጅም ደቂቃዎች በፀጥታ አካላት እና በእለቱ የጨዋታ ኮሚሽነር ዩሀንስ ስለሺ አሸማጋይነት ከቆዩ በኃላ በኮሚሽነሩ ጥረትና የፀጥታ አስከባሪ አካላት ሁኔታውን በማብረዳቸው በህግ ቁጥጥር ስር ሳይውል ቀርቷል፡፡\nሌሎች ጨዋታዎች(በአምሀ ተስፋዬ)ሀላባ ላይ በ10:00 ድሬዳዋ ፖሊስን ያስተናገደው ሀላባ ከተማ በ23ኛው ደቂቃ ስንታየሁ መንግስቱ በፍፁም ቅጣት ምት ባስቆጠረው ጎል 1-0 አሸንፏል። በዚህም ውጤት መሰረት ሀላባ 
ከተማ በ31 ነጥቦች የምድብ ለ መሪነትን ማጠናከር ችሏል።\nደቡብ ፖሊስ በሜዳው በግብ መንበሽበሹን ቀጥሎ ከመቂ ከተማ ጋር ባደረገው ጨዋታ 4-0 በሆነ ውጤት ማሸነፍ ችሏል። በ10ኛው ደቂቃ ብሩክ ኤልያስ ቀዳሚውን ጎል ሲያስቆጥር በ18ኛው ደቂቃ አበባየው ዮሐንስ ሁለተኛውን ግብ አስቆጥሮ ወደ ዕረፍት አምርተዋል። በ53ኛው ደቂቃ ኤሪክ ሙራንዳ በፍፁም ቅጣት ምት 3ኛውን ሲያክል በ82ኛው ደቂቃ አራተኛ ግብ አስቆጥረዋል። በዚህ ውጤት መሰረት ደቡብ ፖሊስ ደረጃውን ወደ ሁለተኛ ከፍ ማድረግ ችሏል።\nቦንጋ ላይ ካፋ ቡና ሻሸመኔ ከተማን አስተናግዶ በሀቁ ምንይሁን ገዛኸኝ የ5ኛው ደቂቃ ብቸኛ ግብ 1-0 ማሸነፍ ችሎል። ነገሌ ላይ ነገሌ ከተማ ቡታጅራ ከተማን 1-0 ሲያሸንፍ ድሬዳዋ ላይ ረፋድ 04:00 ናሽናል ሴሜንት በሳሙኤል ዘሪሁን ጎሎች ወልቂጤ ከተማን 2-1 መርታት ችሏል። በደረጃ ሰንጠረዡ ሁለተኛ ላይ የነበረው ዲላ ከተማ ደግሞ ወደ ዱራሜ ተጎዞ ከሀምበሪቾ ጋር ያለጎል አቻ በመለያየት ከደረጃው ለመንሸራተት ተገዷል።"]}, {"source_sentence": "በኮሮና ቫይረስ ሳቢያ በአፍሪካ እስከ 190 ሺህ ሰዎች በመጀመሪያው ዓመት ህይዎታቸው ሊያልፍ ይችላል – የዓለም ጤና ድርጅት", "sentences": ["አዲስ አበባ ፣ ሚያዚያ 30 ፣ 2012 (ኤፍ ቢ ሲ) በአፍሪካ ከ83 ሺህ እስከ 190 ሺህ ሰዎች በኮሮና ቫይረስ ሳቢያ በመጀመሪያው ዓመት ብቻ ህይዎታቸው ሊያልፍ እንደሚችል የዓለም ጤና ድርጅት አስጠነቀቀ።ድርጅቱ ትናንት ባወጣው መግለጫ በአፍሪካ ሃገራት ቫይረሱን ለመከላከል የወጡ መመሪያና ህጎች ተግባራዊ እየተደረጉ አለመሆኑን እና እየተወሰዱ ያሉ እርምጃዎችም በቂ አለመሆናቸውን አስታውቋል።ቫይረሱ ወደ አህጉሪቱ ዘግይቶ ቢገባም ለበርካታ ጊዜ የሚቆይ ወረርሽኝ ሊሆን ይችላልም ነው ያለው።በተጨማሪም ቫይረሱን በፍጥነት በቁጥጥር ስር ማዋል ካልተቻለ ከ29 እስከ 44 ሚሊየን ሰዎችን ሊይዝ እንደሚችልም ነው ያስጠነቀቀው።የአሁኑ የድርጅቱ መረጃ በ47 የአፍሪካ ሀገራት የሰራውን ጥናት መሰረት አድርጎ የወጣ ነው ተብሏል።ከሃገራቱ መካከል ደቡብ አፍሪካ፣ አልጄሪያ እና ካሜሩን በቫይረሱ ክፉኛ ሊጠቁ የሚችሉ ሃገራት ይሆናሉ በሚልም ስጋቱን አስቀምጧል።በጥናቱ ግብጽ፣ ሊቢያ፣ ቱኒዚያ፣ ሞሮኮ፣ ኤርትራ፣ ሱዳን፣ ሶማሊያ እና ጂቡቲ አልተካተቱም።በአፍሪካ በኮሮና ቫይረስ ሳቢያ በ47 ሃገራት ከ35 ሺህ በላይ ሰዎች ሲያዙ፥ ከ1 ሺህ 200 በላይ ደግሞ ሕይወታቸው ማለፉን ድርጅቱ ገልጿል።ምንጭ፦ ቢቢሲየዜና ሰዓት ሳይጠብቁ የፋና ብሮድካስቲንግ ኮርፖሬትን ትኩስ እና ሰበር ዜናዎችን በፍጥነት በአጭር የፅሁፍ መልዕክት መልክ በስልክዎ ላይ እንዲደርስዎ ወደ 8111 OK ብለው ይላኩ።", "በወልቂጤ ከተማ ከሳምንታት በፊት ውሉን ለማራዘም ተስማምቶ የነበረው የመስመር አጥቂው ጫላ ተሺታ አሁን ደግሞ ለቀድሞ ክለቡ ሲዳማ ቡና ለመፈረም ተስማማ፡፡የቀድሞው የሻሸመኔ እና ሰበታ ከተማ ተጫዋች ለኢትዮጵያ ከ17 ዓመት በታች ብሔራዊ ቡድን ተሰልፎ ከተጫወተ በኃላ ነበር 2010 ላይ ወደ ሲዳማ ቡና ተቀላቅሎ መጫወት የጀመረው። ተጫዋቹ በውሰት ለአዳማ ከተማም በመሄድ የተጫወተ ሲሆን በ2011 በሲዳማ ቤት ባሳየው መልካም እንቅስቃሴ መነሻነት ዘንድሮ ወደ ወልቂጤ ሄዶ ከክለቡ ጋር መልካም የውድድር ዓመትን ሊጉ እስከተቋረጠበት ጊዜ ድረስ ያሳየ ሲሆን ውሉ በመጠናቀቁ ከሳምንታት በፊት በወልቂጤ ለመቀጠል ቢስማማም በድጋሚ ለቀድሞው ክለቡ ሲዳማ የሁለት ዓመት ውል ለመፈረም ተስማምቷል፡፡", "ማክሰኞ መጋቢት 15 ቀን 2012 ዓ.ም. 
ማለዳ ከማላዊ ወደ ሞዛምቢክ ኢትዮጵያዊያንን ጭኖ ሲጓዝ በነበረ የጭነት ተሽከርካሪ ኮንቴይነር ውስጥ፣ 64 ኢትዮጵያዊያን ሞተው መገኘታቸውን የሞዛምቢክ ባለሥልጣናት አስታወቁ፡፡ ሰነድ አልባ ስደተኞቹ ወደ ደቡብ አፍሪካ ለመሸጋገር ጉዞ የጀመሩ እንደሆኑ ታውቋል፡፡በስደት ወደ ደቡብ አፍሪካ ለመሄድ የጭነት ተሽከርካሪውን ተሳፍረው የነበሩት 78 ኢትዮጵያውያን እንደነበሩ፣ 14 ኢትዮጵያውያን ግን በሕይወት መገኘታቻው ተሰምቷል፡፡ ለ64 ኢትዮጵያውያን ሕይወት ማለፍ ምክንያቱ በአየር ማጣት መታፈን ሊሆን እንደሚችል ተገልጿል፡፡ በሞዛምቢክ ሰሜናዊ ምዕራብ ቴቴ በተባለ ሥፍራ በሚገኝ ሙሳካና በተባለ የክብደት መመዘኛ ጣቢያ ተሽከርካሪው በኢሚግሬሽን ሠራተኞች እንዲቆም ተደርጎ ፍተሻ ሲካሄድ፣ 64 ሟቾች በሕይወት ከተረፉት ጋር መገኘታቸውን አዣንስ ፍራንስ ፕሬስ የሆስፒታል ምንጭ ጠቅሶ ዘግቧል፡፡የሞዛምቢክ ሰሜን ምዕራብ ግዛት ባለሥልጣናት ከጭነት ተሽከርካሪው ላይ አስከሬናቸው የተገኙ ኢትዮጵያውያን፣ ሕይወታቸው በምን ምክንያት እንዳለፈ ምርመራ እየተደረገ መሆኑን መግለጻቸው ታውቋል። በሌላ በኩል ኢትዮጵያውያኑ ሕይወታቸው ያለፈው በኮንቴይነር ውስጥ ታፍነው ሊሆን እንደሚችል በሞዛምቢክ ባለሥልጣናት የተገለጸ ሲሆን፣ የሞዛምቢክ ብሔራዊ ኢሚግሬሽን አገልግሎት ቃል አቀባይ አሜሊያ ደሪዬሮ ሟቾቹን አሳፍሮ የነበረው የጭነት ተሽከርካሪ ሾፌር እንዲቆም ሲጠየቅ ፈቃደኛ እንዳልነበረ መናገራቸውን ቢቢሲ ዘግቧል።እንደ ቃል አቀባይዋ ማብራሪያ የኢሚግሬሽን ሠራተኞች በተሽከርካሪው ውስጥ ድምፅ በመስማታቸው፣ ስደተኞች በውስጡ ሊኖሩ ይችላሉ የሚል ጥርጣሬ አድሮባቸው እንዲቆም ሲያደርጉ ሟቾችን አግኝተዋል።ወደ ደቡብ አፍሪካ በስደት የሚጓዙ ሰነድ አልባ ስደተኛ ኢትዮጵያዊያን ኬንያን፣ ታንዛኒያን፣ ማላዊና ሞዛምቢክን ለመሸጋገሪያነት የሚጠቀሙባቸው እንደሆነ፣ የዛምቢያ ሰሜን ምዕራብ ግዛትም ዋናው የሰነድ አልባ ስደተኞች መተላለፊያ መሆኑን የተለያዩ ሚዲያዎች ዘገባዎች ያመለክታሉ፡፡ በሕይወት የተረፉት 14 ኢትዮጵያዊያን የኮሮና ቫይረስ ምርመራ እንደ ተደረገላቸው ቃል አቀባይዋ መናገራቸውን ቢቢሲ በዘገባው አካቷል።"]}, {"source_sentence": "የአውሮፓና አፍሪካ መሪዎች የሊቢያን የባሪያ ንግድ በፍጥነት ለማስቆም ተስማሙ", "sentences": ["በትናንትናው ዕለት የትግራይ ቴሌቪዝንና ድምፂ ወያነ ቴሌቪዥን ጣብያዎች የሳተላይት ስርጭት መቋረጡ ታውቋል።የሁለቱ መገናኛ ብዙሃን ጣብያዎች ሥራ አስኪያጆች ጉዳዩን ለቢቢሲ ያረጋገጡ ሲሆን የትግራይ ቴሌቪዥን ጣቢያ ሥራ አስኪያጅ የሆኑት አቶ ተሻለ በቀለ የቴሌቪዥኑ ጣብያ የሳተላይት ስርጭት እንዲቋረጥ የተደረገው በኢትዮጵያ መንግሥት ነው ሲሉ ለቢቢሲ ተናግረዋል።\"መንግሥት የሳተላይት ስርጭቱ እንዲቋረጥ አድርጓል። ስለተፈጠረው ነገር ለማወቅ ሳተላይቱን ወደ አከራየን ድርጅት ስንደውል የኢትዮጵያ መንግሥትና የፈረንሳይ መንግሥት ተነጋግረው በሰጡት ትዕዛዝ መሰረት አቋርጠነዋል የሚል ምላሽ ነው የሰጡን፤ ምክንያቱን ስንጠይቅም የፈረንሳይ መንግሥት ያለውን መፈፀም አለብን ነው ያሉት።\" በጉዳዩ ላይ የኢትዮጵያ ብሮድካስት ባለስልጣን ምክትል ሥራ አስኪያጅ አቶ ወንድወሰን አንዷለም ለቢቢሲ በሰጡት ቃል \"እስካሁን ባለኝ መረጃ የኢትዮጵያ ብሮድካስት ባለስልጣን ሚዲያዎችን የመዝጋት እርምጃ አልወሰደም \"ብለዋል።የትግራይ ቴሌቪዥን ሥራ አስኪያጅ ስርጭታቸው እስከ መቼ ድረስ ተቋርጦ እንደሚቀጥል ያውቁ እንደሆን ተጠይቀው ይህንን ጥያቄ ስርጭቱን ላቋረጠው ድርጅት ማቅረባቸውን ይገልፃሉ።እነርሱም እስከመቼ ድረስ እንደተቋረጠ እንደማያውቁ እና \"ከኢትዮጵያ መንግሥት ጋር ተነጋገሩ\" ማለታቸውን ገልፀዋል።ሥራ አስኪያጁ አቶ ተሻለ አክለውም ወደ ኢትዮጵያ ብሮድካስት ባለስልጣን መደወላቸውንና ስልካቸውን የሚመልስላቸው አለማግኘታቸውን ተናግረዋል።አቶ ተሻለ ለቢቢሲ ጨምረው እንደተናገሩት መቋረጡን የሚገልጽ ደብዳቤ ከድርጅቱ አለመጻፉን ለቢቢሲ አረጋግጠዋል። በትግራይ ክልል የሚገኘው የቢቢሲ ሪፖርተር የሁለቱ ቴሌቪዥን ጣቢያዎች ስርጭት እየተላለፈ አለመሆኑን አረጋግጧል።አቶ አበበ አስገዶም፣ የድምፂ ወያነ ቴሌቪዥን ጣብያ ሥራ አስኪያጅ ለሪፖርተራችን እንዳረጋገጡት፣ በበኩላቸው ስርጭታቸው እንደተቋረጠ ፈረንሳይ አገር ወደ ሚገኘው የሳተላይት አገልግሎት አቅራቢ ድርጅት መደወላቸውንና የቴክኒክ ክፍል ኃላፊው የኢትዮጵያ መንግሥት እንዲቋረጥ ማድረጉን እንደነገራቸው ገልፀዋል።ባለፈው ሳምንት ድምፂ ወያነ በአዲስ አበባ የሚገኘው ቢሮው አቃቤ ሕግ በሚያደርግበት ምርመራ የተነሳ ፍተሻ እንደተደረገለት መዘገቡ ይታወሳል።የብሮድካስት ባለስልጣን ምክትል ሥራ አስኪያጅ አቶ ወንድወሰን እነዚህ ሁለት ድርጅቶች የእርምት እርምጃ እንዲወስዱ ጥብቅ ማስጠንቀቂያ መጻፋቸውን አስታውሰዋል።ቢሆንም ግን የጣቢያዎቹ ስርጭት እንዲቋረጥ ባለስልጣኑ የወሰደው ምንም አይነት እርምጃ እንደሌለ ለቢቢሲ ገልጸዋል። ", "የአውሮፓ እና አፍሪካ መሪወች የሊቢያ የባሪያ ንግድን አስመልክቶ በአይቪሪኮስት ባካሄዱት ስብሰባ ስደተኞችን ከሊቢያ በፍጥነት ለማስቆም በሚያስችላቸው መንገድ ላይ ከስምምነት ላይ ደረሱ ፡፡የአውሮፓ ህብረት ፣የአፍሪካ ህብረት እና የተባበሩት መንግስታት ድርጅት የሚታወቁ ህገወጥ የሰዎች አዘዋዋሪዎች ንብረት እንዳይንቀሳቀስ እንዲሁም ማንኛውም የፋይናንስ ምንጫቸው ሙሉ በሙሉ እንዲቋረጥ ውሳኔ አሳልፈዋል፡፡በዚህ ዘመን የባሪያ ንግድ በሊቢያ ይካሄዳል ሲባል የሰማ የዓለም ህዝብ ቁጣውን ገልጿል፡፡ ለጉዳዩ ፈጣን ምላሽ በመሥጠት ችግሩን ለመቅረፍ የአውሮፓ እና አፍሪካ ሀገራት መሪዎች በ ኮቲዲቫር ባደረጉት ስብሰባ ስደተኞቹ ከሊቢያ ባፋጣኝ እንዲወጡ ይደረጋል ብለዋል፡፡በዚህም ስምምነት የአውሮፓ ህብረት ፣የአፍሪካ ህብረት እና የተባበሩት መንግስታት ድርጅት በጋራ የአስቸኳይ እቅድ በማውጣት የህገወጥ አዘዋዋሪዎችን መረብ በመበጣጠስ አደገኛ የሰብአዊ ቀውስ ላይ ያሉ በሊቢያ የሚገኙ ስደተኞችን የመታደግ ተልዕኮ አንግበዋል ተብሏል፡፡የአውሮፓዊያን ህብረት ምክር ቤት ፕሬዝዳንት ዶናልድ ቱስክ የባሪያ ንግዱ ሪፖርት በስብሰባዎች ወቅት ያሰማን መረጃዎች የሚያስደነግጡ ነበሩ በማለት ለችግሩ እልባት ለማምጣት የአፍሪካ እና አውሮፓ መንግስታት ንቁ በሆነ መልኩ በጋራ መስራት ይጠበቅባቸዋል ሲሉ መልክታቸውን አስተላልፈዋል፡፡የፈረንሳዩ ፕሬዝዳንት ኢማኑኤል ማክሮን ስደተኞችን ለመታደግ በሚቋቋመው ግብረ ኃይል የአፍሪካ እና አውሮፓ ፖሊሶች 
ተሳታፊ ይሆናሉ ብለዋል፡፡ፕሬዝዳንት ኢማኑኤል ማክሮን ስደተኞችን በፍጥነት ለማውጣት በሚደረገው ተልዕኮ በአጭር ጊዜ ውስጥ ህገወጥ አዘዋዋሪዎች በቁጥጥር ሥር ለማዋል፣ የባሪያ ሽያጭ የሚካሄድበት ድብቅ ትስስራቸውን ማቋረጥ እንዲሁም በተለዩ ባንኮች የሚያደርጉትን የባንክ ገንዘብ ዝውውር ማስቀረት ቅድሚያ ከሚሰሩ ጉዳዮች መካከል መሆናቸውን አብራርተዋል፡፡በአምስተኛው የአውሮፓ ህበረት እና አፍሪካ ህበረት የአቢጃን ስብሰባ ላይ የቀረበው እቅድ በዋናነት ቻድ ኒጀር ሞሮኮ ኮንጎ እና ሊቢያ ያረቀቁት ሲሆን የፈረንሳይም እጅ አለበት፡፡አውሮፓ ለመግባት በሚያደርጉት ጥረት ሊቢያ ላይ ለባሪያ ንግዱ እየተጋለጡ ያሉትም አብዛኛው ከምዕራብ አፍሪካ ሀገራት የሚነሱ ስደተኞች ቢሆኑም መነሻቸውን ከሌሎች የአፍሪካ ቀጠናዎች ያደረጉም ቁጥራቸው በቀላል የሚገመት እንዳልሆነ ዘገባዎች ያመላክታሉ፡፡እስካሁን ስደተኞችን ለመመለስ በተሠራው የሀገራቱ ሥራ ናይጄሪያ ከ240 በላይ ዜጎቿን ከሊቢያ ስታስወጣ ጋና 100 አይቬሪኮስት ደግሞ ከ155 በላይ ዜጎቿን ከሊቢያው ከባድ ሰብአዊ ቀውስ ታድገዋል ያለው ሮይተርስ ነው፡፡ ", "ሃና ጋዜጠኛ ናት። የቴሌቪዥን ሪፖርተር ሆና ሠርታለች። ነገር ግን ከአንድ ዓመት ከግማሽ በፊት የራሷን መንገድ መከተል መረጠች። ከቴሌቪዥን ሪፖርተርነት ራሷን ካገለለች ወዲህ በኅብረተሰብ አገልግሎት ሥራ ላይ ተሰማርታ ትገኛለች። \n\nከቅርብ ጊዜ ወዲህ ደግሞ ጥቁር አሜሪካውያን ላይ የሚደርሰውን ጭቆና በመቃወም የመብት ተሟጋችነት መድረክን ተቀላቅላለች። ከዚያም አልፎ ለእናትና ለአባቷ አገራት ኢትዮጵያና ኤርትራ የተለያዩ የእርዳታ ሥራዎችን እንደምትሰራ ትናገራለች። በአሁኑ ወቅት በዩትዩብና ፌስቡክ ገፆቿ ሥራዎቿን ታስተዋውቃለች።\n\nሃና ከጥቂት ሳምንታት በፊት በኮሮናቫይረስ የቅርብ ዘመዷን እንዳጣች በትዊተር ገጿ ላይ አስፍራ ነበር። ከዚህም አልፎ እሷም ተመርምራ ኮቪድ-19 እንዳለባት ማወቋን በይፋ በመናገር ሌሎች በእሷ ዕድሜ ያሉ ወጣቶች ራሳቸውን እንዲጠብቁ በተደጋጋሚ መክራለች። \n\nለመሆኑ ሃና ኮሮናቫይረስ እንዴት ሊያገኛት ቻለ? \n\nየበሽታውምልክት\n\nየበሽታው ምልክት የጀመረኝ ሰኔ 23 [ጁን 30] ገደማ ነው። ከዚያ በፊት ባሉት ሳምንታት የተለያዩ የተቃውሞ ሰልፎች ላይ ስሳተፍ ነበር። ነገር ግን በዚህ ወቅት ምንም ዓይነት የቫይረሱ ምልክት አልታየብኝም። ሁሌም የአፍና የአፍንጫ መሸፈኛ ጭምብል እጠቀም ነበር። ሳኒታይዘርም ይዤ ነበር የምንቀሳቀሰው። \n\nምልክቱ መጀመሪያ ሲጀምረኝ መካከለኛ የሚባል ነበር። ከዚያ ግን ወዲያው በጣም እየከፋ መጣ። የመጀመሪያው ምልክት ራስ ምታት ነበር። በጣም ከባድ ራስ ምታት። ከዚያ ያቅለሸልሸኝ ጀመር። ጥርሴን ለመፋቅ ብሩሽ ስጠቀም ይሁን አሊያም ምግብ ልመገብ ስል ወደላይ ይለኛል ግን አያስመልሰኝም። ይህ ሁሉ ስሜት የተሰማኝ በአንድ ሌሊት ነው። \n\nከዚያ በኋላ ባሉት ቀናት የተለያዩ ስሜቶችን አስተናግጃለሁ። ሌላኛው ስሜት ደግሞ መፍዘዝ ነበር፤ በጣም ይደክመኝም ነበር። የምግብ ፍላጎቴም እጅጉን ቀንሶ ነበር። አስታውሳለሁ ቁጭ ብዬ ምግብ እየላሁ ከትንሽ ጉርሻ በኋላ የምግቡ ጣዕም እየጠፋብኝ መጣ።\n\nከእነዚህ ሁሉ ምልክቶች ግን የከፋ የነበረው ድንገት ሌሊት ላይ የጀመረኝ ምልክት ነው። ይህም የትንፋሽ ማጠር ነው። በጣም ያስደነገጠኝ እሱ ነበር። \n\nይህን ጊዜ ነው ወደ ሕክምና ተቋም ሄጄ የተመረመርኩት። \n\nከዚያ በፊት ግን የተለያዩ ተቃውሞዎች ላይ ተሳትፌ ስለነበር ምልክቱ ባይኖረኝም ጥቂት ጊዜያት ተመርምሬ ነበር። በዚህኛው ዙር ግን ከተመርመርኩ በኋላ ራሴን አግልዬ ተቀመጥኩ። ይህንን ያደረግኩት ከሰኔ 24 ጀምሮ ነው። \n\nውጤትጥበቃ\n\nከተመረመርኩ በኋላ ያለው ጊዜ በጣም አስጨናቂ ነበር። በተለይ ውጤቱ መጠበቅ እጅግ ግራ አጋቢ ስሜት ነበረው። ምንም እንኳ በሽታው እንደሚኖርብኝ ብጠረጥርም፤ ቢኖርብኝስ የሚል ጥያቄ ሃሳብ ደጋግሞ ይመጣብኛል። ከየት ይሆን ያገኘሁት? ለሰው አስተላልፌውስ ቢሆን? አለብሽ ከተባልኩ ምን እሆናለሁ? ብቻ የተለያዩ ሃሳቦች ወደ አእምሮዬ ይመጡ ነበር። \n\nየበሽታው ምልክት ሳይታይብኝ የተመረመርኩ ጊዜ ውጤት የመጣልኝ ወዲያው ነበር። ነገር ግን በሁለተኛው ዙር ይመስለኛል ከአምስት ቀናት በኋላ ነው ውጤቴን የሰማሁት። \n\nበጆርጂያ ግዛት ቫይረሱ የሚገኝባቸው ሰዎች ቁጥር እየጨመረ በመምጣቱ ውጤት ይፋ የሚሆንበት ቀንም እየጨመረ መጥቶ ነበር። \n\nበስተመጨረሻ የተመረመርኩ ጊዜ ውጤቴን በፅሑፍ መልዕክት የነገሩኝ ከዘጠኝ ቀናት በኋላ ነው። ደውለው ውጤትሽ 'ፖዘቲቭ' ነው እና ራስሽይን አግልይ አሉኝ። እንግዲህ ይሄ የሆነው እኔ ራሴን አግልዬ ከቆየሁ በኋላ ነው።\n\nቫይረሱ ሊይዘኝ ይችላል ብለሽ አስበሽ ታውቂያለሽ? ብባል፤ መልሴ \"አዎ\" ነው። ምክንያቱም በየትኛውም ጊዜ ከቤቴ ከወጣሁ ሊይዘኝ እንደሚችል አስባለሁ። \n\nእንዲያውም ወረርሽኙ የገባ ሰሞን ለሦስት... 
"]}, {"source_sentence": "የእሁዱ የፋሲል ከነማ ጨዋታ የቴሌቪዥን ሽፋን ያገኛል", "sentences": ["አዲስ አበባ ፣ ህዳር 12 ፣2013 (ኤፍ.ቢ.ሲ) በሚድሮክ ግሩፕ የተገነባው ባለ 25 ፎቅ ህንጻ ስራ ጀምረየአዲስ አበባ ምክትል ከንቲባ ወይዘሮ አዳነች አቤቤ በማህበራዊ የትስስር ገጻቸው ላይ÷በሚድሮክ ግሩፕ በከተማችን ውብ ሆኖ ተገንብቶ ለ13 አመታት ያለ ስራ ቆሞ የነበረውን ባለ 25 ፎቅ ሚና ህንጻን ዛሬ ስራ በማስጀመራችን ደስ ብሎኛል ብለዋል።በሚና ህንጻ አዲስ የተከፈተውን የተለያዩ የሃገራችንን ምርቶች በአንድ ቦታ ለሸማቾች እንዲቀርብ እየሰራ ያለውን ኩዊንስ ሱፐር ማርኬትንም መርቀናል ነው ያሉት።ከንቲባዋ አያይዘውም የእንደዚህ አይነት የዘመናዊ የችርቻሮ አውታር መብዛትና መስፋፋት የነዋሪያችን የገበያ ፍላጎት በማሟላት በአምራችና ሸማች መሃከል ያለውን የግብይት ሰንሰለት ያሳጥራል ሲሉ ገልጸዋል።ይህም የኑሮ ውድነት እና የዋጋ ንረትን በመቀነስ እንዲሁም የስራ እድልን በመፍጠር ረገድ ትልቅ ሚና አለውም ነው ያሉት።በጦር ሃይሎች አካባቢ የተከፈተውን ተመሳሳይ የገበያ ማዕከል ከወር በፊት ጎብኝተናል በሌሎች አካባቢም እንዲከፈቱ ባቀረብነው ጥያቄ መሰረት ተግባራዊ ምላሽ ስለሰጡ የሚድሮክ ግሩፕ አመራሮችን ከልብ አመሰግናለሁ ብለዋል።አሁንም እንዲህ አይነት የህዝቡን ኑሮ መደጎም የሚችሉ ኢንቨስትመንቶችን እናበረታታለንም ነው ያሉት።", "ግብፅ ወደ ናይል ትብብር ማዕቀፍ ለመቀላቀል ያቀረበችውን ጥያቄ የትብብር ማዕቀፍ አባል ሃገራት የሚኒስትሮች ምክር ቤት ውድቅ ማድረጉ የውሃ መስኖና ኤሌክትሪክ ሚኒስትር አስታወቀ ።ሚኒስትሩ ዶክተር ኢንጂነር ስለሺ ጌታሁን እንደገለጹትበመጋቢት ወር በዩጋንዳ ኢንተቤ በተደረገው የሚኒስትሮች ጉባኤ ግብፅ ወደ ትብብር ማዕቀፉ ለመመለስ ያቀረበችው ጥያቄ ውድቅ ሆኗል።የውሃ መስኖና ኤሌክትሪክ ሚኒስትሩ ዶክተር ኢንጂነር ስለሺ ጌታሁን፤ ግብፅ በ1959 የቅኝ ግዛት ዘመን ስምምነት መሰረት የውሃ አጠቃቀም ተግባራዊ ሊደረግ ይገባል የሚል ሀሳብ ይዛ መቅረቧ ጥያቄው ውድቅ እንዲሆን እንዳደረገው ነው ያስታወቁት።የግብጽ ሃሳብ ኢትዮጵያ እንደሃገር የማትቀበለውና የትብብር ማዕቀፉ የቆመላቸውን ምሰሶዎች የሚያፈርስ መሆኑንም ሚኒስትሩ አስገንዝበዋል።ኢትዮጵያን ጨምሮ ከተፋሰሱ ሀብት ያልተጠቀሙ ሃገራትን ተጠቃሚ በሚያደርግ መልኩ የትብብር ማዕቀፉ ሲዘጋጅ እንደ ሃገር የተደረገው ክርክርም ይህን ለማስቀረት ያለመ እንደነበርም አንስተዋል።በታላቁ ህዳሴ ግድብ የውሃ አያያዝ፣ አለቃቀቅና ማህበራዊ ተፅዕኖን አስመልክቶ ቀጣይ ጥናት እንዲያካሂዱ የተመረጡት ሁለቱ የፈረንሳይ ኩባንያዎችም፤ ጥናቱን በምን መልኩ ለማካሄድ እንደተዘጋጁ ለሶስቱ ሃገራት ሪፖርታቸውን አቅርበው ሃገራቱ ምላሻቸውን ለኩባንያዎቹ ማቅረባቸውን ሚኒስትሩ ገልጸዋል።ኢትዮጵያም በጥናቱ ከግድቡ አሉታዊ ተፅዕኖዎች ባለፈ ያለውን አወንታዊ ተፅዕኖና ጠቀሜታ በጥናቱ እንዲካተት ሃሳቧን አቅርባለችም ነው ያሉት።ባለፈው ወር በግብፅ፣ ትናንት ደግሞ በአዲስ አበባ ሶስቱ ሃገራት በጉዳዩ ዙሪያ ውይይት ካደረጉ በኋላ ስምምነት በተደረገባቸው ጉዳዮች ላይ ኩባንያዎቹ ጥናት ማካሄድ ጀምረዋል።የናይል ተፋሰስ ትብብር ማዕቀፍ ስምምነት የተፈጥሮ ሀብትን በጋራና በፍትሃዊነት ለመጠቀም ኢትዮጵያ፣ ዲሞክራቲክ ሪፐብሊክ ኮንጎ፣ ዩጋንዳ፣ ታንዛኒያ፣ ሩዋንዳ፣ ብሩንዲ፣ ኬኒያ፣ ሱዳንና ደቡብ ሱዳን የተስማሙበት እንደሆነም አብራርተዋል ።ኢትዮጵያን ጨምሮ አምስቱ አባል ሃገራት ስምምነቱን በምክር ቤቶቻቸው ሲያፅድቁ ሌሎች አባል ሃገራት በሂደት ላይ ይገኛሉ።የትብብር ማዕቀፉን አንቀጾች ባለመቀበል ግብፅ ከትብብሩ ብትርቅም ባለፉት ዘጠኝ ዓመታት የትብብሩ አንቀጾች እየተቀበለች መምጣቷን የውሃ መስኖና ኤሌክትሪክ ጠቅሰዋል።ባለፉት ጥቂት ወራት ግብፅ ወደ ናይል ትብብር ማዕቀፍ ለመመለስ ጥያቄ አቅርባም ከአባል ሃገራቱ በተዋቀረው የሚኒስትሮች ምክር ቤት የተመረጡ የሱዳን፣ ሩዋንዳና ዩጋንዳ ሚኒስትሮች ጉዳዩን ሲመረምሩ ቆይተዋል ብለዋል ።በሁለቱ ውይይቶች ስምምነት ባልተደረሰባቸው ጉዳዮች ደግሞ በቀጣይ በካርቱም የሶስትዮሹ ውይይት የሚቀጥል ይሆናል።ሚኒስትሩ ከግድቡ ግንባታ ጋር ተያይዞ የታላቁ ህዳሴ ግድብ ግንባታ አሁን ላይ 58 ነጥብ 4 በመቶ ተጠናቋል-(ኤፍ ቢ ሲ) ። ", "በካፍ ኮንፌዴሬሽን ዋንጫ ፋሲል ከነማ ከታንዛንያው አዛም ጋር የሚያደርገው ጨዋታ የቀጥታ የቴሌቪዥን ስርጭት እንደሚያገኝ ታውቋል።ሃዋሳ ከተማን በመለያ ምት በማሸነፍ ወደ አህጉራዊ ውድድሮች በታሪካቸው ለመጀመሪያ ጊዜ ያመሩት ፋሲል ከነማዎች እሁድ ከታንዛኒያው ክለብ አዛም ጋር የመጀመሪያ የማጣሪያ ጨዋታቸውን ያደርጋሉ። በባህር ዳር ዓለምአቀፍ ስታዲየም የሚደረገው ይህ ጨዋታም በአማራ ብዙሃን መገናኛ ድርጅት (አማራ ቲቪ) በቀጥታ እንደሚተላለፍ ታውቋል። ከተቋሙ ባገኘነው መረጃ መሰረት ጨዋታውን ለማስተላለፍ ስራዎች እየተሰሩ መሆናቸው ሲገለፅ መቀመጫውን ታንዛኒያ ያደረገው አዛም ቲቪም ከቴሌቪዥን ጣቢያው (አማራ ቲቪ) ጋር ተስማምቶ ጨዋታውን ለማስተላለፍ እንደተወሰነ እና መከፈል ያለበትን ክፍያ ለጣቢያው እንደፈፀመ ተነግሯል።በተያያዘ ዜና ጨዋታውን የሚመሩት ሱዳናዊው ዳኞች ባህር ዳር መግባታቸው ተረጋግጣል። ሶከር ኢትዮጵያም ጨዋታውን በቀጥታ የፅሁፍ ስርጭት ለስፖርት ቤተሰቡ እንደሚያደርስ ለማሳወቅ ይወዳል።"]}, {"source_sentence": "ኤም ቲ ኤን ለአልቃይዳና ታሊባን ጉቦ በመስጠት ተወነጀለ", "sentences": ["ኩባንያው ለእነዚህ ቡድኖች ገንዘብ የሰጠው አፍጋኒስታን ውስጥ የሚገኝና ትልቅ ኢንቨስትመንት ያደረገባቸው የኔትዎርክ ታዎሮች ላይ ጥቃት እንዳይደርስበት ለጥበቃ ነው ተብሏል።\n\nበውንጀላው መቀመጫቸውን አሜሪካ ያደረጉ ሌሎች አምስት ኩባንያዎችም ያሉ ሲሆን ክሱ የቀረበው አፍጋኒስታን ውስጥ በተገደሉ የአሜሪካ ዜጎች ስም ነው።\n\n• ጃዋር ኦሮሞ ፌደራሊስት ኮንግረስን መቀላቀሉ ተረጋገጠ\n\n• ሱዳን በ29 ግለሰቦች ላይ የሞት ፍርድ አስተላለፈች\n\n• \"ሙስና ለመጣው ፖለቲካዊ ለውጥ አንድ ምክንያት ነው\" \n\nበቀረበው ክስ እንደተባለው ሁለቱ አሸባሪ ቡድኖች ከኤም ቲ ኤን ያገኙትን ገንዘብ እንደ አውሮፓውያኑ ከ2009 እስከ 2017 አፍጋኒስታን ውስጥ ላካሄዷቸው የጥቃት ዘመቻዎች ተጠቅመውበታል።\n\nይህ ደግሞ የአሜሪካን የፀረ ሽብር አዋጅን የሚፃረር ነው፤ ስለዚህም ኤም ቲ ኤን ይህን ህግ ተላልፏል ተብሏል።\n\nኩባንያው ግን 
በየትኛውም ቦታ ስራውን የሚያካሂደው ሃላፊነት በተሞላበት መንገድ እንደሆነ ገልጿል።\n\nኤም ቲ ኤን በአፍሪካ ግዙፉ፤ በአለም አቀፍ ደረጃ ደግሞ ከስምንት ትልልቅ ኩባንያዎች አንዱ ነው። ከ240 ሚሊዮን ተጠቃሚዎች በላይም አለው ተብሎ ይታመናል።\n\nበ2015 ያልተመዘገቡ ሲም ካርዶችን ባለመሰረዝ በናይጄሪያ ባለስልጣናት በቀረበበት ክስ 5 ቢሊዮን ዶላር እንዲቀጣ ተፈርዶበት፤ ከረዥም ክርክር በኋላ እንዲሁም የያኔው የደቡብ አፍሪካ ፕሬዝዳንት ጃኮም ዙማ በጉዳዩ ጣልቃ ገብተው ቅጣቱ ወደ 1.7 ቢሊዮን ዶላር እንደወረደለት የሚታወስ ነው።\n\nየዛሬ ዓመትም በኢራን የቀድሞ የደቡብ አፍሪካ አምባሳደር ኤምቲኤን ኢራን ውስጥ እንዲሰራና የ 31.6 ቢሊዮን ዶላር ፕሮጀክት እንዲያሸንፍ ጉቦ ተቀብለዋል በሚል መታሰራቸውም ይታወሳል።\n\n ", "አዲስ አበባ፣ ጥቅምት 5፣ 2013 (ኤፍ.ቢ.ሲ) ፈረንሳይ በሀገሪቱ ዳግም እያገረሸ የመጣውን የኮሮና ቫይረስ ወረርሽኝ ስርጭትን ለመግታት ፓሪስን ጨምሮ በስምንት ከተሞች የሰዓት እላፊ ልትጥል ነው።", "ባለፉት 20 አመታት ዴሞክራሲን በማስረፅ ረገድ የህዝቦችን ተሳትፎ የቃኘ ጥናት ይፋ በሆነበት ወቅት እንደተገለፀው፤ በርካታ የሚዲያ ተቋማት የዴሞክራሲ እሴቶችን አጉልቶ በማውጣት ረገድ ሰፊ ውስንነቶች ታተውባቸዋል፡፡ባለፉት ዓመታት በርካታዎቹ የስነ-ምግባር መርሆዎችን ሳይጠብቁ የመዘገብ ዝንባሌ ነበራቸው ተብሏል፡፡በኢትዮጵያ ስራ አመራር ኢኒስቲትዩት በተካሄደውና ከተለያዩ የህብረተሰብ ክፍሎች የተወከሉ ዜጎች በተሳተፉበት አውደ ጥናት በዋናነት በዴሞክራሲ ስርፀት ዙሪያ የዜጎችን ምልከታ፣ አተገባበርና ተፅእኖን በመገምገም መፍትሄን ማመላከት እንደሆነ ተገልጿል፡፡ሙሉ በሙሉ በአዲስ አበባ ከተማ ላይ ትኩረቱን ባደረገው በዚህ ጥናት ፖለቲከኞች ሀገራዊ አጀንዳዎች ላይ በማተኮር ሀሳብ ከማመንጨት ይልቅ በግላዊ ጥቅሞቻቸው ላይ ማተኮራቸው ሀላፊነታቸውን በአግባቡ እንዳይወጡ ማድረጉ ተነስቷል፡፡ዜጎችም ቢሆኑ ነገሮችን በሰከነ ሁኔታ ከማጤን ይልቅ በስሜታዊነት ወደ አላስፈላጊ ግጭቶች የማምራት ሁኔታ ሲስተዋልባቸው እንደነበር ያመላከተው ጥናቱ፤ ይህም ዴሞክራሲ ስር እንዳይሰድ የራሱን ተፅዕኖ ማሳደሩን ተነስቷል፡፡በመንግስት በኩል የታዩ ክፍተቶችንም ጥናቱ ያመላከተ ሲሆን፤ በተለይም ለአሰራር ምቹ ያልሆኑ አደረጃጀቶችን ያለመተቸት ችግር፣ በፓርቲዎች የግል አቋም ጣልቃ መግባትን ጨምሮ የዴሞክራሲ ተቋማትን ተናበውና ተቀናጅተው እንዲሰሩ ያለማድረግ ችግሮች ተስተውለዋል፡፡በመሆኑም ሁሉም ባለድርሻ አካላት ለዴሞክራሲ ስርፀጥ ሚናውን በአግባቡ እንዲወጣ በመድረኩ ጥሪ ቀርቧል፡፡"]}], "model-index": [{"name": "BERT Amharic Text Embedding Small", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 512", "type": "dim_512"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.5560931357726036, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.6976111279104928, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.7535530692470517, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.8097973994557001, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.5560931357726036, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.23253704263683098, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.15071061384941034, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.08097973994557, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.5560931357726036, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.6976111279104928, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.7535530692470517, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.8097973994557001, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.6809741097507487, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.6399296105903295, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.6457194091829267, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 256", "type": "dim_256"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.5512549138191715, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.6924705170849713, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.7393407922588449, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.807983066223163, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.5512549138191715, "name": 
"Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.2308235056949904, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.14786815845176898, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.0807983066223163, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.5512549138191715, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.6924705170849713, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.7393407922588449, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.807983066223163, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.6754787963348321, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.6336093231769057, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.6391902642253109, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 128", "type": "dim_128"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.5382521923193226, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.6785606289688539, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.7323858482007862, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.7943755669791351, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.5382521923193226, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.2261868763229513, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.14647716964015725, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.07943755669791351, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.5382521923193226, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.6785606289688539, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.7323858482007862, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.7943755669791351, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.6626715802528078, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.6209357495644148, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.6268762533351724, "name": "Cosine Map@100"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,246 |
HuggingFaceTB/SmolLM2-360M-Instruct
|
HuggingFaceTB
|
text-generation
|
[
"transformers",
"tensorboard",
"onnx",
"safetensors",
"llama",
"text-generation",
"transformers.js",
"conversational",
"en",
"arxiv:2502.02737",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-10-31T13:41:35Z |
2025-02-19T16:08:01+00:00
| 69,161 | 102 |
---
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- safetensors
- onnx
- transformers.js
---
# SmolLM2

## Table of Contents
1. [Model Summary](#model-summary)
2. [Limitations](#limitations)
3. [Training](#training)
4. [License](#license)
5. [Citation](#citation)
## Model Summary
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device. More details are in our paper: https://arxiv.org/abs/2502.02737
SmolLM2 demonstrates significant advances over its predecessor SmolLM1, particularly in instruction following, knowledge, and reasoning. The 360M model was trained on 4 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new filtered datasets we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
The instruct model additionally supports tasks such as text rewriting, summarization, and function calling (for the 1.7B) thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
You can find the SFT dataset here: https://huggingface.co/datasets/HuggingFaceTB/smol-smoltalk and the fine-tuning code in the [alignment handbook](https://github.com/huggingface/alignment-handbook/tree/main/recipes/smollm2).
For more details refer to: https://github.com/huggingface/smollm. You will find pre-training, post-training, evaluation and local inference code.
### How to use
### Transformers
```bash
pip install transformers
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-360M-Instruct"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
messages = [{"role": "user", "content": "What is the capital of France."}]
input_text=tokenizer.apply_chat_template(messages, tokenize=False)
print(input_text)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
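The snippet above prints the full decoded sequence, prompt included. As an optional follow-up (not part of the original card), you can slice off the prompt tokens and decode only what the model generated:

```python
# Decode only the newly generated tokens, i.e. everything after the prompt.
new_tokens = outputs[0][inputs.shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```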
### Chat in TRL
You can also use the TRL CLI to chat with the model from the terminal:
```bash
pip install trl
trl chat --model_name_or_path HuggingFaceTB/SmolLM2-360M-Instruct --device cpu
```
## Evaluation
In this section, we report the evaluation results of SmolLM2. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them.
## Base Pre-Trained Model
| Metrics | SmolLM2-360M | Qwen2.5-0.5B | SmolLM-360M |
|:-------------------|:------------:|:------------:|:------------:|
| HellaSwag | **54.5** | 51.2 | 51.8 |
| ARC (Average) | **53.0** | 45.4 | 50.1 |
| PIQA | **71.7** | 69.9 | 71.6 |
| MMLU (cloze) | **35.8** | 33.7 | 34.4 |
| CommonsenseQA | **38.0** | 31.6 | 35.3 |
| TriviaQA | **16.9** | 4.3 | 9.1 |
| Winogrande | 52.5 | **54.1** | 52.8 |
| OpenBookQA | **37.4** | **37.4** | 37.2 |
| GSM8K (5-shot) | 3.2 | **33.4** | 1.6 |
## Instruction Model
| Metric | SmolLM2-360M-Instruct | Qwen2.5-0.5B-Instruct | SmolLM-360M-Instruct |
|:-----------------------------|:---------------------:|:---------------------:|:---------------------:|
| IFEval (Average prompt/inst) | **41.0** | 31.6 | 19.8 |
| MT-Bench | 3.66 | **4.16** | 3.37 |
| HellaSwag | **52.1** | 48.0 | 47.9 |
| ARC (Average) | **43.7** | 37.3 | 38.8 |
| PIQA | **70.8** | 67.2 | 69.4 |
| MMLU (cloze) | **32.8** | 31.7 | 30.6 |
| BBH (3-shot) | 27.3 | **30.7** | 24.4 |
| GSM8K (5-shot) | 7.43 | **26.8** | 1.36 |
## Limitations
SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
## Training
### Model
- **Architecture:** Transformer decoder
- **Pretraining tokens:** 4T
- **Precision:** bfloat16
### Hardware
- **GPUs:** 64 H100
### Software
- **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main)
## License
[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation
```bibtex
@misc{allal2025smollm2smolgoesbig,
title={SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model},
author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Guilherme Penedo and Lewis Tunstall and Andrés Marafioti and Hynek Kydlíček and Agustín Piqueres Lajarín and Vaibhav Srivastav and Joshua Lochner and Caleb Fahlgren and Xuan-Son Nguyen and Clémentine Fourrier and Ben Burtenshaw and Hugo Larcher and Haojun Zhao and Cyril Zakka and Mathieu Morlon and Colin Raffel and Leandro von Werra and Thomas Wolf},
year={2025},
eprint={2502.02737},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.02737},
}
```
| null |
Non_BioNLP
|
# SmolLM2

## Table of Contents
1. [Model Summary](#model-summary)
2. [Limitations](#limitations)
3. [Training](#training)
4. [License](#license)
5. [Citation](#citation)
## Model Summary
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device. More details are in our paper: https://arxiv.org/abs/2502.02737
SmolLM2 demonstrates significant advances over its predecessor SmolLM1, particularly in instruction following, knowledge, and reasoning. The 360M model was trained on 4 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new filtered datasets we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
The instruct model additionally supports tasks such as text rewriting, summarization, and function calling (for the 1.7B) thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
You can find the SFT dataset here: https://huggingface.co/datasets/HuggingFaceTB/smol-smoltalk and the fine-tuning code in the [alignment handbook](https://github.com/huggingface/alignment-handbook/tree/main/recipes/smollm2).
For more details refer to: https://github.com/huggingface/smollm. You will find pre-training, post-training, evaluation and local inference code.
### How to use
### Transformers
```bash
pip install transformers
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-360M-Instruct"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
messages = [{"role": "user", "content": "What is the capital of France."}]
input_text=tokenizer.apply_chat_template(messages, tokenize=False)
print(input_text)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
### Chat in TRL
You can also use the TRL CLI to chat with the model from the terminal:
```bash
pip install trl
trl chat --model_name_or_path HuggingFaceTB/SmolLM2-360M-Instruct --device cpu
```
## Evaluation
In this section, we report the evaluation results of SmolLM2. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them.
## Base Pre-Trained Model
| Metrics | SmolLM2-360M | Qwen2.5-0.5B | SmolLM-360M |
|:-------------------|:------------:|:------------:|:------------:|
| HellaSwag | **54.5** | 51.2 | 51.8 |
| ARC (Average) | **53.0** | 45.4 | 50.1 |
| PIQA | **71.7** | 69.9 | 71.6 |
| MMLU (cloze) | **35.8** | 33.7 | 34.4 |
| CommonsenseQA | **38.0** | 31.6 | 35.3 |
| TriviaQA | **16.9** | 4.3 | 9.1 |
| Winogrande | 52.5 | **54.1** | 52.8 |
| OpenBookQA | **37.4** | **37.4** | 37.2 |
| GSM8K (5-shot) | 3.2 | **33.4** | 1.6 |
## Instruction Model
| Metric | SmolLM2-360M-Instruct | Qwen2.5-0.5B-Instruct | SmolLM-360M-Instruct |
|:-----------------------------|:---------------------:|:---------------------:|:---------------------:|
| IFEval (Average prompt/inst) | **41.0** | 31.6 | 19.8 |
| MT-Bench | 3.66 | **4.16** | 3.37 |
| HellaSwag | **52.1** | 48.0 | 47.9 |
| ARC (Average) | **43.7** | 37.3 | 38.8 |
| PIQA | **70.8** | 67.2 | 69.4 |
| MMLU (cloze) | **32.8** | 31.7 | 30.6 |
| BBH (3-shot) | 27.3 | **30.7** | 24.4 |
| GSM8K (5-shot) | 7.43 | **26.8** | 1.36 |
## Limitations
SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
## Training
### Model
- **Architecture:** Transformer decoder
- **Pretraining tokens:** 4T
- **Precision:** bfloat16
### Hardware
- **GPUs:** 64 H100
### Software
- **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main)
## License
[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation
```bibtex
@misc{allal2025smollm2smolgoesbig,
title={SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model},
author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Guilherme Penedo and Lewis Tunstall and Andrés Marafioti and Hynek Kydlíček and Agustín Piqueres Lajarín and Vaibhav Srivastav and Joshua Lochner and Caleb Fahlgren and Xuan-Son Nguyen and Clémentine Fourrier and Ben Burtenshaw and Hugo Larcher and Haojun Zhao and Cyril Zakka and Mathieu Morlon and Colin Raffel and Leandro von Werra and Thomas Wolf},
year={2025},
eprint={2502.02737},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.02737},
}
```
|
{"language": ["en"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["safetensors", "onnx", "transformers.js"]}
|
task
|
[
"SUMMARIZATION"
] | 42,247 |
QuantFactory/Bielik-7B-v0.1-GGUF
|
QuantFactory
| null |
[
"transformers",
"gguf",
"continuously_pretrained",
"pl",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-08-31T04:06:06Z |
2024-08-31T04:41:06+00:00
| 59 | 2 |
---
language:
- pl
library_name: transformers
license: apache-2.0
tags:
- continuously_pretrained
inference:
parameters:
temperature: 0.7
---

# QuantFactory/Bielik-7B-v0.1-GGUF
This is quantized version of [speakleash/Bielik-7B-v0.1](https://huggingface.co/speakleash/Bielik-7B-v0.1) created using llama.cpp
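The original card below documents the base model and does not show how to run the GGUF files themselves. A minimal sketch using llama-cpp-python follows; the quant filename is a hypothetical example, so check this repository's file list for the actual names.

```python
from llama_cpp import Llama

# Hypothetical filename: pick a real .gguf file downloaded from this repo.
llm = Llama(
    model_path="Bielik-7B-v0.1.Q4_K_M.gguf",
    n_ctx=4096,  # matches the context length reported in the original card below
)
out = llm("Najważniejszym celem człowieka na ziemi jest", max_tokens=100)
print(out["choices"][0]["text"])
```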
# Original Model Card
<p align="center">
<img src="https://huggingface.co/speakleash/Bielik-7B-v0.1/raw/main/speakleash_cyfronet.png">
</p>
# Bielik-7B-v0.1
The Bielik-7B-v0.1 is a generative text model featuring 7 billion parameters, meticulously evolved from its predecessor, the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), through processing of over 70 billion tokens. The aforementioned model stands as a testament to the unique collaboration between the open-science/open-source project SpeakLeash and the High Performance Computing (HPC) center: ACK Cyfronet AGH. Developed and trained on Polish text corpora, which have been cherry-picked and processed by the SpeakLeash team, this endeavor leverages Polish large-scale computing infrastructure, specifically within the PLGrid environment, and more precisely, the HPC center ACK Cyfronet AGH. The creation and training of the Bielik-7B-v0.1 was propelled by the support of computational grant number PLG/2024/016951, conducted on the Athena and Helios supercomputers, enabling the use of cutting-edge technology and computational resources essential for large-scale machine learning processes. As a result, the model exhibits an exceptional ability to understand and process the Polish language, providing accurate responses and performing a variety of linguistic tasks with high precision.
⚠️ This is a base model intended for further fine-tuning across most use cases. If you're looking for a model ready for chatting or following instructions out-of-the-box, please use [Bielik-7B-Instruct-v0.1](https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1).
🎥 Demo: https://huggingface.co/spaces/speakleash/Bielik-7B-Instruct-v0.1
🗣️ Chat Arena<span style="color:red;">*</span>: https://arena.speakleash.org.pl/
<span style="color:red;">*</span>Chat Arena is a platform for testing and comparing different AI language models, allowing users to evaluate their performance and quality.
## Model
Bielik-7B-v0.1 has been trained with the use of an original open source framework called [ALLaMo](https://github.com/chrisociepa/allamo) implemented by [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/). This framework allows users to train language models with an architecture similar to LLaMA and Mistral in a fast and efficient way.
The model training was conducted on the Helios Supercomputer at the ACK Cyfronet AGH, utilizing 256 NVIDIA GH200 cards while achieving a throughput exceeding 9200 tokens/GPU/second.
The training dataset was composed of Polish texts collected and made available through the [SpeakLeash](https://speakleash.org/) project. We used over 36 billion tokens for two epochs of training.
### Model description:
* **Developed by:** [SpeakLeash](https://speakleash.org/)
* **Language:** Polish
* **Model type:** causal decoder-only
* **Adopted from:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **License:** Apache 2.0 (commercial use allowed)
* **Model ref:** speakleash:debfc8635c781358e8db833a333887a5
### Quality evaluation
An XGBoost classification model was prepared to evaluate the quality of texts in native Polish. It is based on 93 features, such as the ratio of out-of-vocabulary words to all words (OOVs), the number of nouns and verbs, average sentence length, etc. The model outputs the category of a given document (HIGH, MEDIUM or LOW) along with a probability. This approach allows the implementation of a dedicated pipeline to choose documents; we kept entries with a HIGH quality index and a probability exceeding 90%.
This filtration and appropriate selection of texts enable the provision of a condensed and high-quality database of texts in Polish for training purposes.
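The classifier and its 93-feature extractor are not published in this card, so the sketch below only illustrates the filtering rule described above; `extract_features` and `quality_clf` are hypothetical stand-ins.

```python
import numpy as np

LABELS = ["LOW", "MEDIUM", "HIGH"]

def keep_document(text, quality_clf, extract_features, threshold=0.90):
    feats = np.asarray(extract_features(text)).reshape(1, -1)  # 93 stylometric features
    proba = quality_clf.predict_proba(feats)[0]                # e.g. an XGBClassifier
    best = int(np.argmax(proba))
    # Keep only documents classified HIGH with probability above 90%, as described above.
    return LABELS[best] == "HIGH" and proba[best] > threshold
```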
## Training
* Framework: [ALLaMo](https://github.com/chrisociepa/allamo)
* Visualizations: [W&B](https://wandb.ai)
<p align="center">
<img src="https://huggingface.co/speakleash/Bielik-7B-v0.1/raw/main/train_loss.png">
</p>
<p align="center">
<img src="https://huggingface.co/speakleash/Bielik-7B-v0.1/raw/main/train_ppl.png">
</p>
<p align="center">
<img src="https://huggingface.co/speakleash/Bielik-7B-v0.1/raw/main/train_acc.png">
</p>
### Training hyperparameters:
| **Hyperparameter** | **Value** |
|-----------------------------|------------------|
| Context length | 4096 |
| Micro Batch Size | 4 |
| Batch Size | 4194304 |
| Learning Rate (cosine) | 3e-05 -> 2e-05 |
| Warmup Iterations | 2000 |
| All Iterations | 17350 |
| Optimizer | AdamW |
| β1, β2 | 0.9, 0.95 |
| Adam_eps | 1e−8 |
| Weight Decay | 0.1 |
| Grad Clip | 1.0 |
| Precision | bfloat16 (mixed) |
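Read together, these values imply 4,194,304 tokens / 4,096 context = 1,024 sequences per optimizer step, which matches the 256 GPUs x micro batch 4 mentioned above, so little or no gradient accumulation would be needed. The schedule below is an interpretation of the table (linear warmup, then cosine decay from 3e-05 to 2e-05), not code taken from the ALLaMo framework:

```python
import math

def bielik_lr(step, warmup=2000, total=17350, peak=3e-5, final=2e-5):
    """Hypothetical reconstruction of the warmup + cosine schedule in the table above."""
    if step < warmup:
        return peak * step / warmup  # linear warmup to the peak LR
    progress = (step - warmup) / (total - warmup)
    return final + 0.5 * (peak - final) * (1.0 + math.cos(math.pi * progress))

sequences_per_step = 4_194_304 // 4096  # 1024 sequences, i.e. 256 GPUs * micro batch 4
print(sequences_per_step, bielik_lr(0), bielik_lr(2000), bielik_lr(17350))
```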
### Quickstart
This model can be easily loaded using the AutoModelForCausalLM functionality.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "speakleash/Bielik-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
In order to reduce the memory usage, you can use smaller precision (`bfloat16`).
```python
import torch
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
```
And then you can use Hugging Face Pipelines to generate text:
```python
import transformers
text = "Najważniejszym celem człowieka na ziemi jest"
pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer)
sequences = pipeline(max_new_tokens=100, do_sample=True, top_k=50, eos_token_id=tokenizer.eos_token_id, text_inputs=text)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
Generated output:
> Najważniejszym celem człowieka na ziemi jest życie w pokoju, harmonii i miłości. Dla każdego z nas bardzo ważne jest, aby otaczać się kochanymi osobami.
## Evaluation
Models have been evaluated on [Open PL LLM Leaderboard](https://huggingface.co/spaces/speakleash/open_pl_llm_leaderboard) 5-shot. The benchmark evaluates models in NLP tasks like sentiment analysis, categorization, text classification but does not test chatting skills. Here are presented:
- Average - average score among all tasks normalized by baseline scores
- Reranking - reranking task, commonly used in RAG
- Reader (Generator) - open book question answering task, commonly used in RAG
- Perplexity (lower is better) - as a bonus, does not correlate with other scores and should not be used for model comparison
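The Average column is normalized against each task's baseline, which is why the majority-class baseline scores 0.00 and weaker models go negative. One plausible formulation, stated here as an assumption since the leaderboard's exact formula may differ, is:

```python
def normalized(raw_score, baseline, max_score=100.0):
    # Maps the baseline to 0 and a perfect score to 100; results below the
    # baseline come out negative, matching the negative averages below.
    return 100.0 * (raw_score - baseline) / (max_score - baseline)
```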
As of April 3, 2024, the following table showcases the current scores of pretrained and continuously pretrained models according to the Open PL LLM Leaderboard, evaluated in a 5-shot setting:
| | Average | RAG Reranking | RAG Reader | Perplexity |
|--------------------------------------------------------------------------------------|----------:|--------------:|-----------:|-----------:|
| **7B parameters models:** | | | | |
| Baseline (majority class) | 0.00 | 53.36 | - | - |
| OPI-PG/Qra-7b | 11.13 | 54.40 | 75.25 | 203.36 |
| meta-llama/Llama-2-7b-hf | 12.73 | 54.02 | 77.92 | 850.45 |
| internlm/internlm2-base-7b | 20.68 | 52.39 | 69.85 | 3110.92 |
| [Bielik-7B-v0.1](https://huggingface.co/speakleash/Bielik-7B-v0.1) | 29.38 | **62.13** | **88.39** | 123.31 |
| mistralai/Mistral-7B-v0.1 | 30.67 | 60.35 | 85.39 | 857.32 |
| internlm/internlm2-7b | 33.03 | 69.39 | 73.63 | 5498.23 |
| alpindale/Mistral-7B-v0.2-hf | 33.05 | 60.23 | 85.21 | 932.60 |
| speakleash/mistral-apt3-7B/spi-e0_hf (experimental) | **35.50** | **62.14** | 87.48 | 132.78 |
| | | | | |
| **Models with different sizes:** | | | | |
| sdadas/polish-gpt2-xl (1.7B) | -23.22 | 48.07 | 3.04 | 160.95 |
| Azurro/APT3-1B-Base (1B) | -8.23 | 51.49 | 18.94 | 249.90 |
| OPI-PG/Qra-1b (1B) | -5.44 | 47.65 | 38.51 | 398.96 |
| internlm/internlm2-1_8b (1.8B) | -2.78 | 49.37 | 31.88 | 60296.30 |
| OPI-PG/Qra-13b (13B) | 29.03 | 53.28 | 83.03 | 168.66 |
| upstage/SOLAR-10.7B-v1.0 (10.7B) | 38.12 | 75.81 | 86.39 | 641.05 |
| | | | | |
| **Polish instruction fine-tuned models:** | | | | |
| szymonrucinski/Curie-7B-v1 | 26.72 | 55.58 | 85.19 | 389.17 |
| Voicelab/trurl-2-7b | 18.85 | 60.67 | 77.19 | 1098.88 |
| [Bielik-7B-Instruct-v0.1](https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1) | 39.28 | 61.89 | 86.00 | 277.92 |
As you can see, Bielik-7B-v0.1 does not have the best Average score, but it has some clear advantages, e.g. the best score in the RAG Reader task.
The results in the above table were obtained without utilizing instruction templates for instructional models, instead treating them like base models.
This approach could skew the results, as instructional models are optimized with specific instructions in mind.
## Limitations and Biases
Bielik-7B-v0.1 is not intended for deployment without fine-tuning. It should not be used for human-facing interactions without further guardrails and user consent.
Bielik-7B-v0.1 can produce factually incorrect output, and should not be relied on to produce factually accurate data. Bielik-7B-v0.1 was trained on various public datasets. While great efforts have been taken to clear the training data, it is possible that this model can generate lewd, false, biased or otherwise offensive outputs.
## License
The model is licensed under Apache 2.0, which allows for commercial use.
## Citation
Please cite this model using the following format:
```
@misc{Bielik7Bv01,
title = {Introducing Bielik-7B-v0.1: Polish Language Model},
author = {Ociepa, Krzysztof and Flis, Łukasz and Wróbel, Krzysztof and Gwoździej, Adrian and {SpeakLeash Team} and {Cyfronet Team}},
year = {2024},
url = {https://huggingface.co/speakleash/Bielik-7B-v0.1},
note = {Accessed: 2024-04-01}, % change this date
urldate = {2024-04-01} % change this date
}
```
## Responsible for training the model
* [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/)<sup>SpeakLeash</sup> - team leadership, conceptualizing, data preparation, process optimization and oversight of training
* [Łukasz Flis](https://www.linkedin.com/in/lukasz-flis-0a39631/)<sup>Cyfronet AGH</sup> - coordinating and supervising the training
* [Adrian Gwoździej](https://www.linkedin.com/in/adrgwo/)<sup>SpeakLeash</sup> - data cleaning and quality
* [Krzysztof Wróbel](https://www.linkedin.com/in/wrobelkrzysztof/)<sup>SpeakLeash</sup> - benchmarks
The model could not have been created without the commitment and work of the entire SpeakLeash team, whose contribution is invaluable. Thanks to the hard work of many individuals, it was possible to gather a large amount of content in Polish and establish collaboration between the open-science SpeakLeash project and the HPC center: ACK Cyfronet AGH. Individuals who contributed to the creation of the model through their commitment to the open-science SpeakLeash project:
[Sebastian Kondracki](https://www.linkedin.com/in/sebastian-kondracki/),
[Maria Filipkowska](https://www.linkedin.com/in/maria-filipkowska/),
[Grzegorz Urbanowicz](https://www.linkedin.com/in/grzegorz-urbanowicz-05823469/),
[Szymon Baczyński](https://www.linkedin.com/in/szymon-baczynski/),
[Paweł Kiszczak](https://www.linkedin.com/in/paveu-kiszczak/),
[Igor Ciuciura](https://www.linkedin.com/in/igor-ciuciura-1763b52a6/),
[Paweł Cyrta](https://www.linkedin.com/in/cyrta),
[Jacek Chwiła](https://www.linkedin.com/in/jacek-chwila/),
[Jan Maria Kowalski](https://www.linkedin.com/in/janmariakowalski/),
[Karol Jezierski](https://www.linkedin.com/in/karol-jezierski/),
[Kamil Nonckiewicz](https://www.linkedin.com/in/kamil-nonckiewicz/),
[Izabela Babis](https://www.linkedin.com/in/izabela-babis-2274b8105/),
[Nina Babis](https://www.linkedin.com/in/nina-babis-00055a140/),
[Waldemar Boszko](https://www.linkedin.com/in/waldemarboszko),
[Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/),
[Piotr Rybak](https://www.linkedin.com/in/piotrrybak/)
and many other wonderful researchers and enthusiasts of the AI world.
Members of the ACK Cyfronet AGH team providing valuable support and expertise:
[Szymon Mazurek](https://www.linkedin.com/in/sz-mazurek-ai/).
## Contact Us
If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/3G9DVM39).
| null |
Non_BioNLP
|

# QuantFactory/Bielik-7B-v0.1-GGUF
This is quantized version of [speakleash/Bielik-7B-v0.1](https://huggingface.co/speakleash/Bielik-7B-v0.1) created using llama.cpp
# Original Model Card
<p align="center">
<img src="https://huggingface.co/speakleash/Bielik-7B-v0.1/raw/main/speakleash_cyfronet.png">
</p>
# Bielik-7B-v0.1
The Bielik-7B-v0.1 is a generative text model featuring 7 billion parameters, meticulously evolved from its predecessor, the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), through processing of over 70 billion tokens. The aforementioned model stands as a testament to the unique collaboration between the open-science/open-source project SpeakLeash and the High Performance Computing (HPC) center: ACK Cyfronet AGH. Developed and trained on Polish text corpora, which have been cherry-picked and processed by the SpeakLeash team, this endeavor leverages Polish large-scale computing infrastructure, specifically within the PLGrid environment, and more precisely, the HPC center ACK Cyfronet AGH. The creation and training of the Bielik-7B-v0.1 was propelled by the support of computational grant number PLG/2024/016951, conducted on the Athena and Helios supercomputers, enabling the use of cutting-edge technology and computational resources essential for large-scale machine learning processes. As a result, the model exhibits an exceptional ability to understand and process the Polish language, providing accurate responses and performing a variety of linguistic tasks with high precision.
⚠️ This is a base model intended for further fine-tuning across most use cases. If you're looking for a model ready for chatting or following instructions out-of-the-box, please use [Bielik-7B-Instruct-v0.1](https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1).
🎥 Demo: https://huggingface.co/spaces/speakleash/Bielik-7B-Instruct-v0.1
🗣️ Chat Arena<span style="color:red;">*</span>: https://arena.speakleash.org.pl/
<span style="color:red;">*</span>Chat Arena is a platform for testing and comparing different AI language models, allowing users to evaluate their performance and quality.
## Model
Bielik-7B-v0.1 has been trained with the use of an original open source framework called [ALLaMo](https://github.com/chrisociepa/allamo) implemented by [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/). This framework allows users to train language models with an architecture similar to LLaMA and Mistral in a fast and efficient way.
The model training was conducted on the Helios Supercomputer at the ACK Cyfronet AGH, utilizing 256 NVIDIA GH200 cards while achieving a throughput exceeding 9200 tokens/GPU/second.
The training dataset was composed of Polish texts collected and made available through the [SpeakLeash](https://speakleash.org/) project. We used over 36 billion tokens for two epochs of training.
### Model description:
* **Developed by:** [SpeakLeash](https://speakleash.org/)
* **Language:** Polish
* **Model type:** causal decoder-only
* **Adopted from:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **License:** Apache 2.0 (commercial use allowed)
* **Model ref:** speakleash:debfc8635c781358e8db833a333887a5
### Quality evaluation
An XGBoost classification model was prepared to evaluate the quality of texts in native Polish. It is based on 93 features, such as the ratio of out-of-vocabulary words to all words (OOVs), the number of nouns and verbs, average sentence length, etc. The model outputs the category of a given document (HIGH, MEDIUM or LOW) along with a probability. This approach allows the implementation of a dedicated pipeline to choose documents; we kept entries with a HIGH quality index and a probability exceeding 90%.
This filtration and appropriate selection of texts enable the provision of a condensed and high-quality database of texts in Polish for training purposes.
## Training
* Framework: [ALLaMo](https://github.com/chrisociepa/allamo)
* Visualizations: [W&B](https://wandb.ai)
<p align="center">
<img src="https://huggingface.co/speakleash/Bielik-7B-v0.1/raw/main/train_loss.png">
</p>
<p align="center">
<img src="https://huggingface.co/speakleash/Bielik-7B-v0.1/raw/main/train_ppl.png">
</p>
<p align="center">
<img src="https://huggingface.co/speakleash/Bielik-7B-v0.1/raw/main/train_acc.png">
</p>
### Training hyperparameters:
| **Hyperparameter** | **Value** |
|-----------------------------|------------------|
| Context length | 4096 |
| Micro Batch Size | 4 |
| Batch Size | 4194304 |
| Learning Rate (cosine) | 3e-05 -> 2e-05 |
| Warmup Iterations | 2000 |
| All Iterations | 17350 |
| Optimizer | AdamW |
| β1, β2 | 0.9, 0.95 |
| Adam_eps | 1e−8 |
| Weight Decay | 0.1 |
| Grad Clip | 1.0 |
| Precision | bfloat16 (mixed) |
### Quickstart
This model can be easily loaded using the AutoModelForCausalLM functionality.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "speakleash/Bielik-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
In order to reduce the memory usage, you can use smaller precision (`bfloat16`).
```python
import torch
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
```
And then you can use Hugging Face Pipelines to generate text:
```python
import transformers
text = "Najważniejszym celem człowieka na ziemi jest"
pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer)
sequences = pipeline(max_new_tokens=100, do_sample=True, top_k=50, eos_token_id=tokenizer.eos_token_id, text_inputs=text)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
Generated output:
> Najważniejszym celem człowieka na ziemi jest życie w pokoju, harmonii i miłości. Dla każdego z nas bardzo ważne jest, aby otaczać się kochanymi osobami.
## Evaluation
Models have been evaluated on [Open PL LLM Leaderboard](https://huggingface.co/spaces/speakleash/open_pl_llm_leaderboard) 5-shot. The benchmark evaluates models in NLP tasks like sentiment analysis, categorization, text classification but does not test chatting skills. Here are presented:
- Average - average score among all tasks normalized by baseline scores
- Reranking - reranking task, commonly used in RAG
- Reader (Generator) - open book question answering task, commonly used in RAG
- Perplexity (lower is better) - as a bonus, does not correlate with other scores and should not be used for model comparison
As of April 3, 2024, the following table showcases the current scores of pretrained and continuously pretrained models according to the Open PL LLM Leaderboard, evaluated in a 5-shot setting:
| | Average | RAG Reranking | RAG Reader | Perplexity |
|--------------------------------------------------------------------------------------|----------:|--------------:|-----------:|-----------:|
| **7B parameters models:** | | | | |
| Baseline (majority class) | 0.00 | 53.36 | - | - |
| OPI-PG/Qra-7b | 11.13 | 54.40 | 75.25 | 203.36 |
| meta-llama/Llama-2-7b-hf | 12.73 | 54.02 | 77.92 | 850.45 |
| internlm/internlm2-base-7b | 20.68 | 52.39 | 69.85 | 3110.92 |
| [Bielik-7B-v0.1](https://huggingface.co/speakleash/Bielik-7B-v0.1) | 29.38 | **62.13** | **88.39** | 123.31 |
| mistralai/Mistral-7B-v0.1 | 30.67 | 60.35 | 85.39 | 857.32 |
| internlm/internlm2-7b | 33.03 | 69.39 | 73.63 | 5498.23 |
| alpindale/Mistral-7B-v0.2-hf | 33.05 | 60.23 | 85.21 | 932.60 |
| speakleash/mistral-apt3-7B/spi-e0_hf (experimental) | **35.50** | **62.14** | 87.48 | 132.78 |
| | | | | |
| **Models with different sizes:** | | | | |
| sdadas/polish-gpt2-xl (1.7B) | -23.22 | 48.07 | 3.04 | 160.95 |
| Azurro/APT3-1B-Base (1B) | -8.23 | 51.49 | 18.94 | 249.90 |
| OPI-PG/Qra-1b (1B) | -5.44 | 47.65 | 38.51 | 398.96 |
| internlm/internlm2-1_8b (1.8B) | -2.78 | 49.37 | 31.88 | 60296.30 |
| OPI-PG/Qra-13b (13B) | 29.03 | 53.28 | 83.03 | 168.66 |
| upstage/SOLAR-10.7B-v1.0 (10.7B) | 38.12 | 75.81 | 86.39 | 641.05 |
| | | | | |
| **Polish instruction fine-tuned models:** | | | | |
| szymonrucinski/Curie-7B-v1 | 26.72 | 55.58 | 85.19 | 389.17 |
| Voicelab/trurl-2-7b | 18.85 | 60.67 | 77.19 | 1098.88 |
| [Bielik-7B-Instruct-v0.1](https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1) | 39.28 | 61.89 | 86.00 | 277.92 |
As you can see, Bielik-7B-v0.1 does not have the best Average score, but it has some clear advantages, e.g. the best score in the RAG Reader task.
The results in the above table were obtained without utilizing instruction templates for instructional models, instead treating them like base models.
This approach could skew the results, as instructional models are optimized with specific instructions in mind.
## Limitations and Biases
Bielik-7B-v0.1 is not intended for deployment without fine-tuning. It should not be used for human-facing interactions without further guardrails and user consent.
Bielik-7B-v0.1 can produce factually incorrect output, and should not be relied on to produce factually accurate data. Bielik-7B-v0.1 was trained on various public datasets. While great efforts have been taken to clean the training data, it is possible that this model can generate lewd, false, biased or otherwise offensive outputs.
## License
The model is licensed under Apache 2.0, which allows for commercial use.
## Citation
Please cite this model using the following format:
```
@misc{Bielik7Bv01,
title = {Introducing Bielik-7B-v0.1: Polish Language Model},
author = {Ociepa, Krzysztof and Flis, Łukasz and Wróbel, Krzysztof and Gwoździej, Adrian and {SpeakLeash Team} and {Cyfronet Team}},
year = {2024},
url = {https://huggingface.co/speakleash/Bielik-7B-v0.1},
note = {Accessed: 2024-04-01}, % change this date
urldate = {2024-04-01} % change this date
}
```
## Responsible for training the model
* [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/)<sup>SpeakLeash</sup> - team leadership, conceptualizing, data preparation, process optimization and oversight of training
* [Łukasz Flis](https://www.linkedin.com/in/lukasz-flis-0a39631/)<sup>Cyfronet AGH</sup> - coordinating and supervising the training
* [Adrian Gwoździej](https://www.linkedin.com/in/adrgwo/)<sup>SpeakLeash</sup> - data cleaning and quality
* [Krzysztof Wróbel](https://www.linkedin.com/in/wrobelkrzysztof/)<sup>SpeakLeash</sup> - benchmarks
The model could not have been created without the commitment and work of the entire SpeakLeash team, whose contribution is invaluable. Thanks to the hard work of many individuals, it was possible to gather a large amount of content in Polish and establish collaboration between the open-science SpeakLeash project and the HPC center: ACK Cyfronet AGH. Individuals who contributed to the creation of the model through their commitment to the open-science SpeakLeash project:
[Sebastian Kondracki](https://www.linkedin.com/in/sebastian-kondracki/),
[Maria Filipkowska](https://www.linkedin.com/in/maria-filipkowska/),
[Grzegorz Urbanowicz](https://www.linkedin.com/in/grzegorz-urbanowicz-05823469/),
[Szymon Baczyński](https://www.linkedin.com/in/szymon-baczynski/),
[Paweł Kiszczak](https://www.linkedin.com/in/paveu-kiszczak/),
[Igor Ciuciura](https://www.linkedin.com/in/igor-ciuciura-1763b52a6/),
[Paweł Cyrta](https://www.linkedin.com/in/cyrta),
[Jacek Chwiła](https://www.linkedin.com/in/jacek-chwila/),
[Jan Maria Kowalski](https://www.linkedin.com/in/janmariakowalski/),
[Karol Jezierski](https://www.linkedin.com/in/karol-jezierski/),
[Kamil Nonckiewicz](https://www.linkedin.com/in/kamil-nonckiewicz/),
[Izabela Babis](https://www.linkedin.com/in/izabela-babis-2274b8105/),
[Nina Babis](https://www.linkedin.com/in/nina-babis-00055a140/),
[Waldemar Boszko](https://www.linkedin.com/in/waldemarboszko),
[Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/),
[Piotr Rybak](https://www.linkedin.com/in/piotrrybak/)
and many other wonderful researchers and enthusiasts of the AI world.
Members of the ACK Cyfronet AGH team providing valuable support and expertise:
[Szymon Mazurek](https://www.linkedin.com/in/sz-mazurek-ai/).
## Contact Us
If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/3G9DVM39).
|
{"language": ["pl"], "library_name": "transformers", "license": "apache-2.0", "tags": ["continuously_pretrained"], "inference": {"parameters": {"temperature": 0.7}}}
|
task
|
[
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING"
] | 42,248 |
faizalbs777/news-summarization-finetuned-mistral-7b
|
faizalbs777
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | 2024-05-08T09:19:10Z |
2024-11-03T00:54:23+00:00
| 4 | 0 |
---
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- generator
library_name: peft
license: apache-2.0
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: news-summarization-finetuned-mistral-7b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# news-summarization-finetuned-mistral-7b
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0477
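This repository contains a PEFT (LoRA) adapter rather than full model weights, so it has to be loaded on top of the base Mistral model. Below is a minimal loading sketch, assuming the `peft` and `transformers` versions listed under framework versions; the prompt format shown is an assumption based on the Mistral-Instruct chat template, not a documented training format:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "faizalbs777/news-summarization-finetuned-mistral-7b"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # requires accelerate
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the fine-tuned LoRA adapter

# Hypothetical prompt; the actual prompt template used during fine-tuning is not documented.
prompt = "[INST] Summarize the following news article:\n<article text>\n[/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```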
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: constant
- training_steps: 100
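For reference, here is a hedged sketch of how the values above map onto `transformers.TrainingArguments`; the output directory is a placeholder, and the TRL `SFTTrainer`/PEFT wiring implied by the tags is omitted:
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="news-summarization-finetuned-mistral-7b",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",           # AdamW with betas=(0.9, 0.999) and eps=1e-8 (defaults)
    lr_scheduler_type="constant",
    max_steps=100,
)
```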
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1549 | 0.0109 | 20 | 2.1256 |
| 2.0143 | 0.0219 | 40 | 2.0855 |
| 2.0595 | 0.0328 | 60 | 2.0559 |
| 1.9444 | 0.0437 | 80 | 2.0500 |
| 2.1425 | 0.0547 | 100 | 2.0477 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.4.1+cu121
- Datasets 2.16.0
- Tokenizers 0.20.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# news-summarization-finetuned-mistral-7b
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0477
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: constant
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1549 | 0.0109 | 20 | 2.1256 |
| 2.0143 | 0.0219 | 40 | 2.0855 |
| 2.0595 | 0.0328 | 60 | 2.0559 |
| 1.9444 | 0.0437 | 80 | 2.0500 |
| 2.1425 | 0.0547 | 100 | 2.0477 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.4.1+cu121
- Datasets 2.16.0
- Tokenizers 0.20.0
|
{"base_model": "mistralai/Mistral-7B-Instruct-v0.2", "datasets": ["generator"], "library_name": "peft", "license": "apache-2.0", "tags": ["trl", "sft", "generated_from_trainer"], "model-index": [{"name": "news-summarization-finetuned-mistral-7b", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 42,249 |
openai/whisper-medium.en
|
openai
|
automatic-speech-recognition
|
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"arxiv:2212.04356",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | 2022-09-26T07:02:02Z |
2024-01-22T17:55:36+00:00
| 28,581 | 52 |
---
language:
- en
license: apache-2.0
pipeline_tag: automatic-speech-recognition
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: whisper-medium.en
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- type: wer
value: 4.120542365210176
name: Test WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- type: wer
value: 7.431640255663553
name: Test WER
---
# Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al. from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.
The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions to a *different* language to the audio.
Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
# Usage
This checkpoint is an *English-only* model, meaning it can be used for English speech recognition. Multilingual speech
recognition or speech translation is possible through use of a multilingual checkpoint.
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)
## Transcription
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium.en")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium.en")
>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|notimestamps|> Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
## Evaluation
This code snippet shows how to evaluate Whisper medium.en on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):
```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load
>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium.en")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium.en").to("cuda")
>>> def map_to_pred(batch):
>>> audio = batch["audio"]
>>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>> batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>> with torch.no_grad():
>>> predicted_ids = model.generate(input_features.to("cuda"))[0]
>>> transcription = processor.decode(predicted_ids)
>>> batch["prediction"] = processor.tokenizer._normalize(transcription)
>>> return batch
>>> result = librispeech_test_clean.map(map_to_pred)
>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
3.0154449620004904
```
## Long-Form Transcription
The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`:
```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>> "automatic-speech-recognition",
>>> model="openai/whisper-medium.en",
>>> chunk_length_s=30,
>>> device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
'timestamp': (0.0, 5.44)}]
```
Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech; using them for classification is not only unevaluated but also inappropriate, particularly for inferring human attributes.
## Training Data
The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.
As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
## Performance and Limitations
Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis of these limitations is provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). This behavior and the associated hallucinations are likely to be worse in lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
| null |
Non_BioNLP
|
# Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al. from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.
The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions to a *different* language to the audio.
Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
# Usage
This checkpoint is an *English-only* model, meaning it can be used for English speech recognition. Multilingual speech
recognition or speech translation is possible through use of a multilingual checkpoint.
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)
## Transcription
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium.en")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium.en")
>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|notimestamps|> Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
## Evaluation
This code snippet shows how to evaluate Whisper medium.en on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):
```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load
>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium.en")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium.en").to("cuda")
>>> def map_to_pred(batch):
>>> audio = batch["audio"]
>>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>> batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>> with torch.no_grad():
>>> predicted_ids = model.generate(input_features.to("cuda"))[0]
>>> transcription = processor.decode(predicted_ids)
>>> batch["prediction"] = processor.tokenizer._normalize(transcription)
>>> return batch
>>> result = librispeech_test_clean.map(map_to_pred)
>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
3.0154449620004904
```
## Long-Form Transcription
The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`:
```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>> "automatic-speech-recognition",
>>> model="openai/whisper-medium.en",
>>> chunk_length_s=30,
>>> device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
'timestamp': (0.0, 5.44)}]
```
Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech; using them for classification is not only unevaluated but also inappropriate, particularly for inferring human attributes.
## Training Data
The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.
As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
## Performance and Limitations
Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis of these limitations is provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). This behavior and the associated hallucinations are likely to be worse in lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
{"language": ["en"], "license": "apache-2.0", "pipeline_tag": "automatic-speech-recognition", "tags": ["audio", "automatic-speech-recognition", "hf-asr-leaderboard"], "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}], "model-index": [{"name": "whisper-medium.en", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (clean)", "type": "librispeech_asr", "config": "clean", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 4.120542365210176, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (other)", "type": "librispeech_asr", "config": "other", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 7.431640255663553, "name": "Test WER"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 42,250 |
joshuapb/fine-tuned-matryoshka-100
|
joshuapb
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:100",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-08-23T22:45:47Z |
2024-08-23T22:46:11+00:00
| 4 | 0 |
---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:100
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'Fig. 8. The accuracy of instruct-GPT series models of different
sizes (left to right, small to large). Larger model doing better on binary classification
of answerable and unanswerable questions in SelfAware eval. (Image source: Yin
et al. 2023)
Another way to assess the model’s awareness of unknown knowledge is to measure
the model’s output uncertainty. When a question is in-between known and unknown,
the model is expected to demonstrate the right level of confidence.
The experiment by Kadavath et al. (2022) showed that LLMs are shown to be well
calibrated in their estimation probabilities of answer correctness on diverse
multiple choice questions in a format with visible lettered answer options (MMLU,
TruthfulQA, QuALITY, LogiQA), meaning that the predicted probability coincides
with the frequency of that answer being true. RLHF fine-tuning makes the model
poorly calibrated, but higher sampling temperature leads to better calibration
results.'
sentences:
- What effect does the slower acquisition of new knowledge compared to established
knowledge have on the effectiveness of large language models in practical scenarios?
- How do discrepancies identified during the final output review phase affect the
overall quality of the generated responses?
- What effect does reinforcement learning from human feedback (RLHF) fine-tuning
have on how well large language models assess the accuracy of their answers?
- source_sentence: 'Fig. 1. Knowledge categorization of close-book QA examples based
on how likely the model outputs correct answers. (Image source: Gekhman et al.
2024)
Some interesting observations of the experiments, where dev set accuracy is considered
a proxy for hallucinations.
Unknown examples are fitted substantially slower than Known.
The best dev performance is obtained when the LLM fits the majority of the Known
training examples but only a few of the Unknown ones. The model starts to hallucinate
when it learns most of the Unknown examples.
Among Known examples, MaybeKnown cases result in better overall performance, more
essential than HighlyKnown ones.'
sentences:
- What is the relationship between the structural formatting of inquiries and the
occurrence of calibration errors in artificial intelligence models, and in what
ways can this understanding contribute to the optimization of model training processes?
- What are the benefits of integrating a pretrained Natural Language Inference (NLI)
model with MPNet when assessing the reliability of reasoning paths in knowledge
retrieval?
- In what ways do the classifications of Known versus Unknown examples influence
the propensity of AI models to generate hallucinations during their training processes?
- source_sentence: 'Fig. 3. The evaluation framework for the FactualityPrompt benchmark.(Image
source: Lee, et al. 2022)
Given the model continuation and paired Wikipedia text, two evaluation metrics
for hallucination are considered:
Hallucination NE (Named Entity) errors: Using a pretrained entity detection model
and document-level grounding, this metric measures the fraction of detected named
entities that do not appear in the ground truth document.
Entailment ratios: Using a RoBERTa model fine-tuned on MNLI and sentence-level
knowledge grounding, this metric calculates the fraction of generated sentences
that are marked as relevant to the paired Wikipedia sentence by the entailment
model.'
sentences:
- What impact does the implementation of a pretrained query-document relevance model
have on the process of document selection in research methodologies?
- In what ways does the sequence in which information is delivered in AI-generated
responses influence the likelihood of generating inaccuracies or hallucinations?
- In what ways does the FactualityPrompt benchmark assess the performance of named
entity detection models, particularly in relation to errors arising from hallucinated
named entities?
- source_sentence: 'Fig. 1. Knowledge categorization of close-book QA examples based
on how likely the model outputs correct answers. (Image source: Gekhman et al.
2024)
Some interesting observations of the experiments, where dev set accuracy is considered
a proxy for hallucinations.
Unknown examples are fitted substantially slower than Known.
The best dev performance is obtained when the LLM fits the majority of the Known
training examples but only a few of the Unknown ones. The model starts to hallucinate
when it learns most of the Unknown examples.
Among Known examples, MaybeKnown cases result in better overall performance, more
essential than HighlyKnown ones.'
sentences:
- In what ways does the inherently adversarial structure of TruthfulQA inquiries
facilitate the detection of prevalent fallacies in human cognitive processes,
and what implications does this have for understanding the constraints of expansive
language models?
- In what ways do MaybeKnown cases influence the performance of a model when contrasted
with HighlyKnown examples, particularly in relation to the occurrence of hallucinations?
- In what ways does the Self-RAG framework leverage reflection tokens to enhance
the quality of its generated outputs, and what implications does this have for
the overall generation process?
- source_sentence: 'Fine-tuning New Knowledge#
Fine-tuning a pre-trained LLM via supervised fine-tuning and RLHF is a common
technique for improving certain capabilities of the model like instruction following.
Introducing new knowledge at the fine-tuning stage is hard to avoid.
Fine-tuning usually consumes much less compute, making it debatable whether the
model can reliably learn new knowledge via small-scale fine-tuning. Gekhman et
al. 2024 studied the research question of whether fine-tuning LLMs on new knowledge
encourages hallucinations. They found that (1) LLMs learn fine-tuning examples
with new knowledge slower than other examples with knowledge consistent with the
pre-existing knowledge of the model; (2) Once the examples with new knowledge
are eventually learned, they increase the model’s tendency to hallucinate.'
sentences:
- How does the IsRel token function in the retrieval process, and what impact does
it have on the relevance of generated content to reduce hallucination?
- What is the relationship between the calibration of AI models and the effectiveness
of verbalized probabilities when applied to tasks of varying difficulty levels?
- How do the results presented by Gekhman et al. in their 2024 study inform our
understanding of the reliability metrics associated with large language models
(LLMs) when subjected to fine-tuning with novel datasets?
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.828125
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9635416666666666
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9739583333333334
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9947916666666666
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.828125
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3211805555555556
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1947916666666666
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09947916666666667
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.828125
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9635416666666666
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9739583333333334
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9947916666666666
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9220150687007592
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8976707175925925
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8981047453703703
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.8020833333333334
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9635416666666666
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9739583333333334
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9895833333333334
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8020833333333334
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3211805555555556
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1947916666666666
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09895833333333333
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8020833333333334
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9635416666666666
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9739583333333334
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9895833333333334
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9077325270335209
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.880220734126984
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8810414411976911
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.796875
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9583333333333334
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.96875
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9791666666666666
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.796875
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3194444444444445
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19374999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09791666666666665
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.796875
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9583333333333334
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.96875
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9791666666666666
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9011377823848584
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8746155753968253
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8757564484126984
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.7864583333333334
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9322916666666666
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9635416666666666
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9635416666666666
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7864583333333334
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3107638888888889
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19270833333333334
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09635416666666667
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7864583333333334
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9322916666666666
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9635416666666666
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9635416666666666
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.888061438431803
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8623263888888889
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8647421480429293
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.6875
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8645833333333334
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9270833333333334
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.96875
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6875
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2881944444444445
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18541666666666665
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09687499999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6875
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8645833333333334
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9270833333333334
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.96875
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8335872598831777
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7895895337301586
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7917890681938919
name: Cosine Map@100
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("joshuapb/fine-tuned-matryoshka-100")
# Run inference
sentences = [
'Fine-tuning New Knowledge#\nFine-tuning a pre-trained LLM via supervised fine-tuning and RLHF is a common technique for improving certain capabilities of the model like instruction following. Introducing new knowledge at the fine-tuning stage is hard to avoid.\nFine-tuning usually consumes much less compute, making it debatable whether the model can reliably learn new knowledge via small-scale fine-tuning. Gekhman et al. 2024 studied the research question of whether fine-tuning LLMs on new knowledge encourages hallucinations. They found that (1) LLMs learn fine-tuning examples with new knowledge slower than other examples with knowledge consistent with the pre-existing knowledge of the model; (2) Once the examples with new knowledge are eventually learned, they increase the model’s tendency to hallucinate.',
'How do the results presented by Gekhman et al. in their 2024 study inform our understanding of the reliability metrics associated with large language models (LLMs) when subjected to fine-tuning with novel datasets?',
'What is the relationship between the calibration of AI models and the effectiveness of verbalized probabilities when applied to tasks of varying difficulty levels?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
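
Because the model was trained with `MatryoshkaLoss`, the embeddings can also be truncated to the smaller dimensions evaluated below (512, 256, 128, 64) with only a modest quality drop. A minimal sketch, assuming a sentence-transformers release that supports the `truncate_dim` argument:
```python
from sentence_transformers import SentenceTransformer

# Keep only the first 256 dimensions of each embedding (Matryoshka truncation)
model_256 = SentenceTransformer("joshuapb/fine-tuned-matryoshka-100", truncate_dim=256)
embeddings = model_256.encode([
    "How does fine-tuning on new knowledge affect hallucination?",
    "Unknown examples are fitted substantially slower than Known ones.",
])
print(embeddings.shape)
# (2, 256)
similarities = model_256.similarity(embeddings, embeddings)
print(similarities.shape)
# (2, 2)
```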
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8281 |
| cosine_accuracy@3 | 0.9635 |
| cosine_accuracy@5 | 0.974 |
| cosine_accuracy@10 | 0.9948 |
| cosine_precision@1 | 0.8281 |
| cosine_precision@3 | 0.3212 |
| cosine_precision@5 | 0.1948 |
| cosine_precision@10 | 0.0995 |
| cosine_recall@1 | 0.8281 |
| cosine_recall@3 | 0.9635 |
| cosine_recall@5 | 0.974 |
| cosine_recall@10 | 0.9948 |
| cosine_ndcg@10 | 0.922 |
| cosine_mrr@10 | 0.8977 |
| **cosine_map@100** | **0.8981** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.8021 |
| cosine_accuracy@3 | 0.9635 |
| cosine_accuracy@5 | 0.974 |
| cosine_accuracy@10 | 0.9896 |
| cosine_precision@1 | 0.8021 |
| cosine_precision@3 | 0.3212 |
| cosine_precision@5 | 0.1948 |
| cosine_precision@10 | 0.099 |
| cosine_recall@1 | 0.8021 |
| cosine_recall@3 | 0.9635 |
| cosine_recall@5 | 0.974 |
| cosine_recall@10 | 0.9896 |
| cosine_ndcg@10 | 0.9077 |
| cosine_mrr@10 | 0.8802 |
| **cosine_map@100** | **0.881** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7969 |
| cosine_accuracy@3 | 0.9583 |
| cosine_accuracy@5 | 0.9688 |
| cosine_accuracy@10 | 0.9792 |
| cosine_precision@1 | 0.7969 |
| cosine_precision@3 | 0.3194 |
| cosine_precision@5 | 0.1937 |
| cosine_precision@10 | 0.0979 |
| cosine_recall@1 | 0.7969 |
| cosine_recall@3 | 0.9583 |
| cosine_recall@5 | 0.9688 |
| cosine_recall@10 | 0.9792 |
| cosine_ndcg@10 | 0.9011 |
| cosine_mrr@10 | 0.8746 |
| **cosine_map@100** | **0.8758** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7865 |
| cosine_accuracy@3 | 0.9323 |
| cosine_accuracy@5 | 0.9635 |
| cosine_accuracy@10 | 0.9635 |
| cosine_precision@1 | 0.7865 |
| cosine_precision@3 | 0.3108 |
| cosine_precision@5 | 0.1927 |
| cosine_precision@10 | 0.0964 |
| cosine_recall@1 | 0.7865 |
| cosine_recall@3 | 0.9323 |
| cosine_recall@5 | 0.9635 |
| cosine_recall@10 | 0.9635 |
| cosine_ndcg@10 | 0.8881 |
| cosine_mrr@10 | 0.8623 |
| **cosine_map@100** | **0.8647** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6875 |
| cosine_accuracy@3 | 0.8646 |
| cosine_accuracy@5 | 0.9271 |
| cosine_accuracy@10 | 0.9688 |
| cosine_precision@1 | 0.6875 |
| cosine_precision@3 | 0.2882 |
| cosine_precision@5 | 0.1854 |
| cosine_precision@10 | 0.0969 |
| cosine_recall@1 | 0.6875 |
| cosine_recall@3 | 0.8646 |
| cosine_recall@5 | 0.9271 |
| cosine_recall@10 | 0.9688 |
| cosine_ndcg@10 | 0.8336 |
| cosine_mrr@10 | 0.7896 |
| **cosine_map@100** | **0.7918** |
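The five result blocks above report the same embeddings truncated to progressively smaller Matryoshka dimensions. As a minimal sketch of how a truncated variant could be loaded and queried (assuming the `truncate_dim` argument available in Sentence Transformers 3.x; the query and passages below are illustrative, adapted from this card's own examples):
```python
from sentence_transformers import SentenceTransformer
# Load the model truncated to one of the evaluated Matryoshka dimensions (256 here)
model = SentenceTransformer("joshuapb/fine-tuned-matryoshka-100", truncate_dim=256)
query = ["How does RLHF fine-tuning affect the calibration of a language model?"]
passages = [
    "RLHF fine-tuning makes the model poorly calibrated, but higher sampling temperature leads to better calibration results.",
    "Unknown examples are fitted substantially slower than Known examples.",
]
query_emb = model.encode(query)        # shape: (1, 256)
passage_emb = model.encode(passages)   # shape: (2, 256)
print(model.similarity(query_emb, passage_emb))  # cosine scores, shape (1, 2)
```
Smaller dimensions trade a little retrieval quality (compare the `cosine_map@100` rows above) for faster search and a smaller index.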
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
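For reference, a minimal sketch of how the non-default values above map onto `SentenceTransformerTrainingArguments`; the `output_dir` and `save_strategy` below are assumptions, not taken from this card:
```python
from sentence_transformers import SentenceTransformerTrainingArguments
args = SentenceTransformerTrainingArguments(
    output_dir="outputs/fine-tuned-matryoshka-100",  # placeholder path
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed, so load_best_model_at_end can compare per-epoch checkpoints
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    load_best_model_at_end=True,
)
```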
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:-------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.3846 | 5 | 5.0472 | - | - | - | - | - |
| 0.7692 | 10 | 4.0023 | - | - | - | - | - |
| 1.0 | 13 | - | 0.7939 | 0.8135 | 0.8282 | 0.7207 | 0.8323 |
| 1.1538 | 15 | 2.3381 | - | - | - | - | - |
| 1.5385 | 20 | 3.4302 | - | - | - | - | - |
| 1.9231 | 25 | 2.08 | - | - | - | - | - |
| 2.0 | 26 | - | 0.8494 | 0.8681 | 0.8781 | 0.7959 | 0.8888 |
| 2.3077 | 30 | 1.4696 | - | - | - | - | - |
| 2.6923 | 35 | 1.8153 | - | - | - | - | - |
| **3.0** | **39** | **-** | **0.8641** | **0.8844** | **0.8924** | **0.7952** | **0.8997** |
| 3.0769 | 40 | 1.3498 | - | - | - | - | - |
| 3.4615 | 45 | 0.9135 | - | - | - | - | - |
| 3.8462 | 50 | 1.3996 | - | - | - | - | - |
| 4.0 | 52 | - | 0.8647 | 0.8775 | 0.8819 | 0.7896 | 0.8990 |
| 4.2308 | 55 | 1.1582 | - | - | - | - | - |
| 4.6154 | 60 | 1.2233 | - | - | - | - | - |
| 5.0 | 65 | 0.9757 | 0.8647 | 0.8758 | 0.8810 | 0.7918 | 0.8981 |
* The bold row denotes the saved checkpoint.
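The per-dimension `cosine_map@100` columns in the log above come from training with `MatryoshkaLoss` wrapped around `MultipleNegativesRankingLoss` (see the citations below). A minimal sketch of that loss setup, assuming (context, question) training pairs with in-batch negatives:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
model = SentenceTransformer("BAAI/bge-base-en-v1.5")
# Ranking loss over (context, question) pairs; other pairs in the batch act as negatives
inner_loss = MultipleNegativesRankingLoss(model)
# Apply the same objective at every truncated dimensionality evaluated above
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])
```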
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("joshuapb/fine-tuned-matryoshka-100")
# Run inference
sentences = [
'Fine-tuning New Knowledge#\nFine-tuning a pre-trained LLM via supervised fine-tuning and RLHF is a common technique for improving certain capabilities of the model like instruction following. Introducing new knowledge at the fine-tuning stage is hard to avoid.\nFine-tuning usually consumes much less compute, making it debatable whether the model can reliably learn new knowledge via small-scale fine-tuning. Gekhman et al. 2024 studied the research question of whether fine-tuning LLMs on new knowledge encourages hallucinations. They found that (1) LLMs learn fine-tuning examples with new knowledge slower than other examples with knowledge consistent with the pre-existing knowledge of the model; (2) Once the examples with new knowledge are eventually learned, they increase the model’s tendency to hallucinate.',
'How do the results presented by Gekhman et al. in their 2024 study inform our understanding of the reliability metrics associated with large language models (LLMs) when subjected to fine-tuning with novel datasets?',
'What is the relationship between the calibration of AI models and the effectiveness of verbalized probabilities when applied to tasks of varying difficulty levels?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8281 |
| cosine_accuracy@3 | 0.9635 |
| cosine_accuracy@5 | 0.974 |
| cosine_accuracy@10 | 0.9948 |
| cosine_precision@1 | 0.8281 |
| cosine_precision@3 | 0.3212 |
| cosine_precision@5 | 0.1948 |
| cosine_precision@10 | 0.0995 |
| cosine_recall@1 | 0.8281 |
| cosine_recall@3 | 0.9635 |
| cosine_recall@5 | 0.974 |
| cosine_recall@10 | 0.9948 |
| cosine_ndcg@10 | 0.922 |
| cosine_mrr@10 | 0.8977 |
| **cosine_map@100** | **0.8981** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.8021 |
| cosine_accuracy@3 | 0.9635 |
| cosine_accuracy@5 | 0.974 |
| cosine_accuracy@10 | 0.9896 |
| cosine_precision@1 | 0.8021 |
| cosine_precision@3 | 0.3212 |
| cosine_precision@5 | 0.1948 |
| cosine_precision@10 | 0.099 |
| cosine_recall@1 | 0.8021 |
| cosine_recall@3 | 0.9635 |
| cosine_recall@5 | 0.974 |
| cosine_recall@10 | 0.9896 |
| cosine_ndcg@10 | 0.9077 |
| cosine_mrr@10 | 0.8802 |
| **cosine_map@100** | **0.881** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7969 |
| cosine_accuracy@3 | 0.9583 |
| cosine_accuracy@5 | 0.9688 |
| cosine_accuracy@10 | 0.9792 |
| cosine_precision@1 | 0.7969 |
| cosine_precision@3 | 0.3194 |
| cosine_precision@5 | 0.1937 |
| cosine_precision@10 | 0.0979 |
| cosine_recall@1 | 0.7969 |
| cosine_recall@3 | 0.9583 |
| cosine_recall@5 | 0.9688 |
| cosine_recall@10 | 0.9792 |
| cosine_ndcg@10 | 0.9011 |
| cosine_mrr@10 | 0.8746 |
| **cosine_map@100** | **0.8758** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7865 |
| cosine_accuracy@3 | 0.9323 |
| cosine_accuracy@5 | 0.9635 |
| cosine_accuracy@10 | 0.9635 |
| cosine_precision@1 | 0.7865 |
| cosine_precision@3 | 0.3108 |
| cosine_precision@5 | 0.1927 |
| cosine_precision@10 | 0.0964 |
| cosine_recall@1 | 0.7865 |
| cosine_recall@3 | 0.9323 |
| cosine_recall@5 | 0.9635 |
| cosine_recall@10 | 0.9635 |
| cosine_ndcg@10 | 0.8881 |
| cosine_mrr@10 | 0.8623 |
| **cosine_map@100** | **0.8647** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6875 |
| cosine_accuracy@3 | 0.8646 |
| cosine_accuracy@5 | 0.9271 |
| cosine_accuracy@10 | 0.9688 |
| cosine_precision@1 | 0.6875 |
| cosine_precision@3 | 0.2882 |
| cosine_precision@5 | 0.1854 |
| cosine_precision@10 | 0.0969 |
| cosine_recall@1 | 0.6875 |
| cosine_recall@3 | 0.8646 |
| cosine_recall@5 | 0.9271 |
| cosine_recall@10 | 0.9688 |
| cosine_ndcg@10 | 0.8336 |
| cosine_mrr@10 | 0.7896 |
| **cosine_map@100** | **0.7918** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:-------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.3846 | 5 | 5.0472 | - | - | - | - | - |
| 0.7692 | 10 | 4.0023 | - | - | - | - | - |
| 1.0 | 13 | - | 0.7939 | 0.8135 | 0.8282 | 0.7207 | 0.8323 |
| 1.1538 | 15 | 2.3381 | - | - | - | - | - |
| 1.5385 | 20 | 3.4302 | - | - | - | - | - |
| 1.9231 | 25 | 2.08 | - | - | - | - | - |
| 2.0 | 26 | - | 0.8494 | 0.8681 | 0.8781 | 0.7959 | 0.8888 |
| 2.3077 | 30 | 1.4696 | - | - | - | - | - |
| 2.6923 | 35 | 1.8153 | - | - | - | - | - |
| **3.0** | **39** | **-** | **0.8641** | **0.8844** | **0.8924** | **0.7952** | **0.8997** |
| 3.0769 | 40 | 1.3498 | - | - | - | - | - |
| 3.4615 | 45 | 0.9135 | - | - | - | - | - |
| 3.8462 | 50 | 1.3996 | - | - | - | - | - |
| 4.0 | 52 | - | 0.8647 | 0.8775 | 0.8819 | 0.7896 | 0.8990 |
| 4.2308 | 55 | 1.1582 | - | - | - | - | - |
| 4.6154 | 60 | 1.2233 | - | - | - | - | - |
| 5.0 | 65 | 0.9757 | 0.8647 | 0.8758 | 0.8810 | 0.7918 | 0.8981 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "BAAI/bge-base-en-v1.5", "datasets": [], "language": ["en"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:100", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "Fig. 8. The accuracy of instruct-GPT series models of different sizes (left to right, small to large). Larger model doing better on binary classification of answerable and unanswerable questions in SelfAware eval. (Image source: Yin et al. 2023)\nAnother way to assess the model’s awareness of unknown knowledge is to measure the model’s output uncertainty. When a question is in-between known and unknown, the model is expected to demonstrate the right level of confidence.\nThe experiment by Kadavath et al. (2022) showed that LLMs are shown to be well calibrated in their estimation probabilities of answer correctness on diverse multiple choice questions in a format with visible lettered answer options (MMLU, TruthfulQA, QuALITY, LogiQA), meaning that the predicted probability coincides with the frequency of that answer being true. RLHF fine-tuning makes the model poorly calibrated, but higher sampling temperature leads to better calibration results.", "sentences": ["What effect does the slower acquisition of new knowledge compared to established knowledge have on the effectiveness of large language models in practical scenarios?", "How do discrepancies identified during the final output review phase affect the overall quality of the generated responses?", "What effect does reinforcement learning from human feedback (RLHF) fine-tuning have on how well large language models assess the accuracy of their answers?"]}, {"source_sentence": "Fig. 1. Knowledge categorization of close-book QA examples based on how likely the model outputs correct answers. (Image source: Gekhman et al. 2024)\nSome interesting observations of the experiments, where dev set accuracy is considered a proxy for hallucinations.\n\nUnknown examples are fitted substantially slower than Known.\nThe best dev performance is obtained when the LLM fits the majority of the Known training examples but only a few of the Unknown ones. The model starts to hallucinate when it learns most of the Unknown examples.\nAmong Known examples, MaybeKnown cases result in better overall performance, more essential than HighlyKnown ones.", "sentences": ["What is the relationship between the structural formatting of inquiries and the occurrence of calibration errors in artificial intelligence models, and in what ways can this understanding contribute to the optimization of model training processes?", "What are the benefits of integrating a pretrained Natural Language Inference (NLI) model with MPNet when assessing the reliability of reasoning paths in knowledge retrieval?", "In what ways do the classifications of Known versus Unknown examples influence the propensity of AI models to generate hallucinations during their training processes?"]}, {"source_sentence": "Fig. 3. 
The evaluation framework for the FactualityPrompt benchmark.(Image source: Lee, et al. 2022)\nGiven the model continuation and paired Wikipedia text, two evaluation metrics for hallucination are considered:\n\nHallucination NE (Named Entity) errors: Using a pretrained entity detection model and document-level grounding, this metric measures the fraction of detected named entities that do not appear in the ground truth document.\nEntailment ratios: Using a RoBERTa model fine-tuned on MNLI and sentence-level knowledge grounding, this metric calculates the fraction of generated sentences that are marked as relevant to the paired Wikipedia sentence by the entailment model.", "sentences": ["What impact does the implementation of a pretrained query-document relevance model have on the process of document selection in research methodologies?", "In what ways does the sequence in which information is delivered in AI-generated responses influence the likelihood of generating inaccuracies or hallucinations?", "In what ways does the FactualityPrompt benchmark assess the performance of named entity detection models, particularly in relation to errors arising from hallucinated named entities?"]}, {"source_sentence": "Fig. 1. Knowledge categorization of close-book QA examples based on how likely the model outputs correct answers. (Image source: Gekhman et al. 2024)\nSome interesting observations of the experiments, where dev set accuracy is considered a proxy for hallucinations.\n\nUnknown examples are fitted substantially slower than Known.\nThe best dev performance is obtained when the LLM fits the majority of the Known training examples but only a few of the Unknown ones. The model starts to hallucinate when it learns most of the Unknown examples.\nAmong Known examples, MaybeKnown cases result in better overall performance, more essential than HighlyKnown ones.", "sentences": ["In what ways does the inherently adversarial structure of TruthfulQA inquiries facilitate the detection of prevalent fallacies in human cognitive processes, and what implications does this have for understanding the constraints of expansive language models?", "In what ways do MaybeKnown cases influence the performance of a model when contrasted with HighlyKnown examples, particularly in relation to the occurrence of hallucinations?", "In what ways does the Self-RAG framework leverage reflection tokens to enhance the quality of its generated outputs, and what implications does this have for the overall generation process?"]}, {"source_sentence": "Fine-tuning New Knowledge#\nFine-tuning a pre-trained LLM via supervised fine-tuning and RLHF is a common technique for improving certain capabilities of the model like instruction following. Introducing new knowledge at the fine-tuning stage is hard to avoid.\nFine-tuning usually consumes much less compute, making it debatable whether the model can reliably learn new knowledge via small-scale fine-tuning. Gekhman et al. 2024 studied the research question of whether fine-tuning LLMs on new knowledge encourages hallucinations. 
They found that (1) LLMs learn fine-tuning examples with new knowledge slower than other examples with knowledge consistent with the pre-existing knowledge of the model; (2) Once the examples with new knowledge are eventually learned, they increase the model’s tendency to hallucinate.", "sentences": ["How does the IsRel token function in the retrieval process, and what impact does it have on the relevance of generated content to reduce hallucination?", "What is the relationship between the calibration of AI models and the effectiveness of verbalized probabilities when applied to tasks of varying difficulty levels?", "How do the results presented by Gekhman et al. in their 2024 study inform our understanding of the reliability metrics associated with large language models (LLMs) when subjected to fine-tuning with novel datasets?"]}], "model-index": [{"name": "BGE base Financial Matryoshka", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 768", "type": "dim_768"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.828125, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.9635416666666666, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.9739583333333334, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9947916666666666, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.828125, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.3211805555555556, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.1947916666666666, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09947916666666667, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.828125, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.9635416666666666, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.9739583333333334, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9947916666666666, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.9220150687007592, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.8976707175925925, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.8981047453703703, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 512", "type": "dim_512"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.8020833333333334, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.9635416666666666, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.9739583333333334, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9895833333333334, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.8020833333333334, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.3211805555555556, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.1947916666666666, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09895833333333333, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.8020833333333334, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.9635416666666666, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.9739583333333334, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9895833333333334, "name": "Cosine Recall@10"}, 
{"type": "cosine_ndcg@10", "value": 0.9077325270335209, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.880220734126984, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.8810414411976911, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 256", "type": "dim_256"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.796875, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.9583333333333334, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.96875, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9791666666666666, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.796875, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.3194444444444445, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.19374999999999998, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09791666666666665, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.796875, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.9583333333333334, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.96875, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9791666666666666, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.9011377823848584, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.8746155753968253, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.8757564484126984, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 128", "type": "dim_128"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.7864583333333334, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.9322916666666666, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.9635416666666666, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9635416666666666, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.7864583333333334, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.3107638888888889, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.19270833333333334, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09635416666666667, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.7864583333333334, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.9322916666666666, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.9635416666666666, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9635416666666666, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.888061438431803, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.8623263888888889, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.8647421480429293, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 64", "type": "dim_64"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.6875, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.8645833333333334, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.9270833333333334, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.96875, 
"name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.6875, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.2881944444444445, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.18541666666666665, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09687499999999999, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.6875, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.8645833333333334, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.9270833333333334, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.96875, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.8335872598831777, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.7895895337301586, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7917890681938919, "name": "Cosine Map@100"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,251 |
luis-cardoso-q/kotodama-multilingual-v3
|
luis-cardoso-q
|
text-classification
|
[
"setfit",
"safetensors",
"xlm-roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"model-index",
"region:us"
] | 2024-02-29T19:57:24Z |
2024-02-29T20:14:23+00:00
| 6 | 0 |
---
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'loan repayment '
- text: 2023-F48
- text: 'acompte '
- text: 2023-12-1165548
- text: Facture 20230040
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.73568281938326
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 7 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.7357 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("luis-cardoso-q/kotodama-multilingual-v3")
# Run inference
preds = model("2023-F48")
```
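Inference also works on batches, and per-class probabilities can be read from the logistic-regression head. A short sketch (the input strings below are illustrative, taken from this card's widget examples):
```python
# Batch prediction returns one label per input text
preds = model(["Facture 20230040", "loan repayment", "2023-12-1165548"])
# Per-class probabilities from the LogisticRegression head, one row per input
probs = model.predict_proba(["2023-F48"])
print(preds)
print(probs)
```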
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 1 | 2.6689 | 16 |
| Label | Training Sample Count |
|:------------------|:----------------------|
| buying | 25 |
| company name | 73 |
| invoice | 128 |
| random characters | 128 |
| refund | 87 |
| rent | 38 |
| salary | 128 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
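A minimal training sketch mirroring the hyperparameters above; the toy dataset is illustrative only, since the actual training data is not published in this card:
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments
from sentence_transformers.losses import CosineSimilarityLoss
# Toy few-shot data using some of the card's label names (illustrative, not the real training set)
train_dataset = Dataset.from_dict({
    "text": [
        "Facture 20230040", "Invoice 2024-0012",
        "refund for cancelled order", "remboursement commande",
        "salary May 2024", "monthly salary payment",
    ],
    "label": ["invoice", "invoice", "refund", "refund", "salary", "salary"],
})
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")
args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(1, 1),
    body_learning_rate=(2e-5, 1e-5),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    seed=42,
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # contrastive fine-tuning of the body, then fitting the classification head
# With an eval_dataset passed to Trainer, trainer.evaluate() reports accuracy as in the table above
```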
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:---------:|:-------------:|:---------------:|
| 0.0001 | 1 | 0.2604 | - |
| 0.0026 | 50 | 0.3244 | - |
| 0.0053 | 100 | 0.2233 | - |
| 0.0079 | 150 | 0.2034 | - |
| 0.0105 | 200 | 0.2998 | - |
| 0.0131 | 250 | 0.2074 | - |
| 0.0158 | 300 | 0.1682 | - |
| 0.0184 | 350 | 0.1815 | - |
| 0.0210 | 400 | 0.155 | - |
| 0.0237 | 450 | 0.16 | - |
| 0.0263 | 500 | 0.117 | - |
| 0.0289 | 550 | 0.1685 | - |
| 0.0315 | 600 | 0.0348 | - |
| 0.0342 | 650 | 0.0912 | - |
| 0.0368 | 700 | 0.0217 | - |
| 0.0394 | 750 | 0.0417 | - |
| 0.0421 | 800 | 0.0592 | - |
| 0.0447 | 850 | 0.047 | - |
| 0.0473 | 900 | 0.0914 | - |
| 0.0499 | 950 | 0.0116 | - |
| 0.0526 | 1000 | 0.022 | - |
| 0.0552 | 1050 | 0.0018 | - |
| 0.0578 | 1100 | 0.0159 | - |
| 0.0605 | 1150 | 0.0097 | - |
| 0.0631 | 1200 | 0.066 | - |
| 0.0657 | 1250 | 0.0027 | - |
| 0.0683 | 1300 | 0.003 | - |
| 0.0710 | 1350 | 0.0146 | - |
| 0.0736 | 1400 | 0.009 | - |
| 0.0762 | 1450 | 0.0016 | - |
| 0.0789 | 1500 | 0.001 | - |
| 0.0815 | 1550 | 0.019 | - |
| 0.0841 | 1600 | 0.0015 | - |
| 0.0867 | 1650 | 0.0003 | - |
| 0.0894 | 1700 | 0.0929 | - |
| 0.0920 | 1750 | 0.013 | - |
| 0.0946 | 1800 | 0.0007 | - |
| 0.0973 | 1850 | 0.0413 | - |
| 0.0999 | 1900 | 0.0922 | - |
| 0.1025 | 1950 | 0.0009 | - |
| 0.1051 | 2000 | 0.001 | - |
| 0.1078 | 2050 | 0.0007 | - |
| 0.1104 | 2100 | 0.0086 | - |
| 0.1130 | 2150 | 0.0017 | - |
| 0.1157 | 2200 | 0.0048 | - |
| 0.1183 | 2250 | 0.0002 | - |
| 0.1209 | 2300 | 0.0518 | - |
| 0.1235 | 2350 | 0.0271 | - |
| 0.1262 | 2400 | 0.0138 | - |
| 0.1288 | 2450 | 0.0136 | - |
| 0.1314 | 2500 | 0.0444 | - |
| 0.1341 | 2550 | 0.0096 | - |
| 0.1367 | 2600 | 0.0064 | - |
| 0.1393 | 2650 | 0.0092 | - |
| 0.1419 | 2700 | 0.0012 | - |
| 0.1446 | 2750 | 0.0044 | - |
| 0.1472 | 2800 | 0.0121 | - |
| 0.1498 | 2850 | 0.0004 | - |
| 0.1525 | 2900 | 0.0002 | - |
| 0.1551 | 2950 | 0.0008 | - |
| 0.1577 | 3000 | 0.0034 | - |
| 0.1603 | 3050 | 0.0002 | - |
| 0.1630 | 3100 | 0.0152 | - |
| 0.1656 | 3150 | 0.0195 | - |
| 0.1682 | 3200 | 0.0005 | - |
| 0.1709 | 3250 | 0.0002 | - |
| 0.1735 | 3300 | 0.0343 | - |
| 0.1761 | 3350 | 0.0095 | - |
| 0.1787 | 3400 | 0.0354 | - |
| 0.1814 | 3450 | 0.0085 | - |
| 0.1840 | 3500 | 0.001 | - |
| 0.1866 | 3550 | 0.0194 | - |
| 0.1893 | 3600 | 0.017 | - |
| 0.1919 | 3650 | 0.0003 | - |
| 0.1945 | 3700 | 0.0024 | - |
| 0.1972 | 3750 | 0.06 | - |
| 0.1998 | 3800 | 0.0006 | - |
| 0.2024 | 3850 | 0.0003 | - |
| 0.2050 | 3900 | 0.0311 | - |
| 0.2077 | 3950 | 0.023 | - |
| 0.2103 | 4000 | 0.0039 | - |
| 0.2129 | 4050 | 0.0085 | - |
| 0.2156 | 4100 | 0.0036 | - |
| 0.2182 | 4150 | 0.0015 | - |
| 0.2208 | 4200 | 0.0584 | - |
| 0.2234 | 4250 | 0.0004 | - |
| 0.2261 | 4300 | 0.0082 | - |
| 0.2287 | 4350 | 0.0001 | - |
| 0.2313 | 4400 | 0.0044 | - |
| 0.2340 | 4450 | 0.0003 | - |
| 0.2366 | 4500 | 0.0495 | - |
| 0.2392 | 4550 | 0.0073 | - |
| 0.2418 | 4600 | 0.0152 | - |
| 0.2445 | 4650 | 0.0033 | - |
| 0.2471 | 4700 | 0.0005 | - |
| 0.2497 | 4750 | 0.0102 | - |
| 0.2524 | 4800 | 0.046 | - |
| 0.2550 | 4850 | 0.0028 | - |
| 0.2576 | 4900 | 0.0014 | - |
| 0.2602 | 4950 | 0.0118 | - |
| 0.2629 | 5000 | 0.0042 | - |
| 0.2655 | 5050 | 0.0005 | - |
| 0.2681 | 5100 | 0.0031 | - |
| 0.2708 | 5150 | 0.0002 | - |
| 0.2734 | 5200 | 0.002 | - |
| 0.2760 | 5250 | 0.0111 | - |
| 0.2786 | 5300 | 0.0286 | - |
| 0.2813 | 5350 | 0.0009 | - |
| 0.2839 | 5400 | 0.0023 | - |
| 0.2865 | 5450 | 0.0079 | - |
| 0.2892 | 5500 | 0.0691 | - |
| 0.2918 | 5550 | 0.0403 | - |
| 0.2944 | 5600 | 0.0002 | - |
| 0.2970 | 5650 | 0.0057 | - |
| 0.2997 | 5700 | 0.0047 | - |
| 0.3023 | 5750 | 0.0322 | - |
| 0.3049 | 5800 | 0.0097 | - |
| 0.3076 | 5850 | 0.0012 | - |
| 0.3102 | 5900 | 0.0047 | - |
| 0.3128 | 5950 | 0.0925 | - |
| 0.3154 | 6000 | 0.0562 | - |
| 0.3181 | 6050 | 0.0058 | - |
| 0.3207 | 6100 | 0.0001 | - |
| 0.3233 | 6150 | 0.0029 | - |
| 0.3260 | 6200 | 0.0001 | - |
| 0.3286 | 6250 | 0.0035 | - |
| 0.3312 | 6300 | 0.0013 | - |
| 0.3338 | 6350 | 0.0152 | - |
| 0.3365 | 6400 | 0.0004 | - |
| 0.3391 | 6450 | 0.0114 | - |
| 0.3417 | 6500 | 0.0906 | - |
| 0.3444 | 6550 | 0.0005 | - |
| 0.3470 | 6600 | 0.0028 | - |
| 0.3496 | 6650 | 0.0395 | - |
| 0.3522 | 6700 | 0.0001 | - |
| 0.3549 | 6750 | 0.0044 | - |
| 0.3575 | 6800 | 0.0121 | - |
| 0.3601 | 6850 | 0.0012 | - |
| 0.3628 | 6900 | 0.0193 | - |
| 0.3654 | 6950 | 0.0014 | - |
| 0.3680 | 7000 | 0.0001 | - |
| 0.3706 | 7050 | 0.0618 | - |
| 0.3733 | 7100 | 0.0066 | - |
| 0.3759 | 7150 | 0.0426 | - |
| 0.3785 | 7200 | 0.0281 | - |
| 0.3812 | 7250 | 0.0254 | - |
| 0.3838 | 7300 | 0.0008 | - |
| 0.3864 | 7350 | 0.0047 | - |
| 0.3890 | 7400 | 0.0088 | - |
| 0.3917 | 7450 | 0.0004 | - |
| 0.3943 | 7500 | 0.0054 | - |
| 0.3969 | 7550 | 0.0371 | - |
| 0.3996 | 7600 | 0.0001 | - |
| 0.4022 | 7650 | 0.0082 | - |
| 0.4048 | 7700 | 0.0162 | - |
| 0.4074 | 7750 | 0.0093 | - |
| 0.4101 | 7800 | 0.0115 | - |
| 0.4127 | 7850 | 0.0114 | - |
| 0.4153 | 7900 | 0.0001 | - |
| 0.4180 | 7950 | 0.0002 | - |
| 0.4206 | 8000 | 0.0098 | - |
| 0.4232 | 8050 | 0.0001 | - |
| 0.4258 | 8100 | 0.0 | - |
| 0.4285 | 8150 | 0.0104 | - |
| 0.4311 | 8200 | 0.0564 | - |
| 0.4337 | 8250 | 0.0002 | - |
| 0.4364 | 8300 | 0.0176 | - |
| 0.4390 | 8350 | 0.0109 | - |
| 0.4416 | 8400 | 0.0001 | - |
| 0.4442 | 8450 | 0.0053 | - |
| 0.4469 | 8500 | 0.0629 | - |
| 0.4495 | 8550 | 0.0324 | - |
| 0.4521 | 8600 | 0.0003 | - |
| 0.4548 | 8650 | 0.0025 | - |
| 0.4574 | 8700 | 0.0032 | - |
| 0.4600 | 8750 | 0.0002 | - |
| 0.4626 | 8800 | 0.0001 | - |
| 0.4653 | 8850 | 0.0475 | - |
| 0.4679 | 8900 | 0.0114 | - |
| 0.4705 | 8950 | 0.0001 | - |
| 0.4732 | 9000 | 0.0028 | - |
| 0.4758 | 9050 | 0.0001 | - |
| 0.4784 | 9100 | 0.0002 | - |
| 0.4810 | 9150 | 0.0001 | - |
| 0.4837 | 9200 | 0.0001 | - |
| 0.4863 | 9250 | 0.0021 | - |
| 0.4889 | 9300 | 0.0001 | - |
| 0.4916 | 9350 | 0.0014 | - |
| 0.4942 | 9400 | 0.0176 | - |
| 0.4968 | 9450 | 0.0005 | - |
| 0.4994 | 9500 | 0.0001 | - |
| 0.5021 | 9550 | 0.0314 | - |
| 0.5047 | 9600 | 0.0613 | - |
| 0.5073 | 9650 | 0.018 | - |
| 0.5100 | 9700 | 0.0 | - |
| 0.5126 | 9750 | 0.0023 | - |
| 0.5152 | 9800 | 0.0013 | - |
| 0.5178 | 9850 | 0.0001 | - |
| 0.5205 | 9900 | 0.0003 | - |
| 0.5231 | 9950 | 0.001 | - |
| 0.5257 | 10000 | 0.0001 | - |
| 0.5284 | 10050 | 0.0193 | - |
| 0.5310 | 10100 | 0.0051 | - |
| 0.5336 | 10150 | 0.0001 | - |
| 0.5362 | 10200 | 0.0005 | - |
| 0.5389 | 10250 | 0.0 | - |
| 0.5415 | 10300 | 0.0001 | - |
| 0.5441 | 10350 | 0.0001 | - |
| 0.5468 | 10400 | 0.0037 | - |
| 0.5494 | 10450 | 0.0309 | - |
| 0.5520 | 10500 | 0.0286 | - |
| 0.5547 | 10550 | 0.0 | - |
| 0.5573 | 10600 | 0.0155 | - |
| 0.5599 | 10650 | 0.0001 | - |
| 0.5625 | 10700 | 0.0077 | - |
| 0.5652 | 10750 | 0.0153 | - |
| 0.5678 | 10800 | 0.0042 | - |
| 0.5704 | 10850 | 0.0103 | - |
| 0.5731 | 10900 | 0.0097 | - |
| 0.5757 | 10950 | 0.0109 | - |
| 0.5783 | 11000 | 0.0001 | - |
| 0.5809 | 11050 | 0.0103 | - |
| 0.5836 | 11100 | 0.0024 | - |
| 0.5862 | 11150 | 0.0001 | - |
| 0.5888 | 11200 | 0.0487 | - |
| 0.5915 | 11250 | 0.0009 | - |
| 0.5941 | 11300 | 0.0001 | - |
| 0.5967 | 11350 | 0.0002 | - |
| 0.5993 | 11400 | 0.0035 | - |
| 0.6020 | 11450 | 0.0005 | - |
| 0.6046 | 11500 | 0.0001 | - |
| 0.6072 | 11550 | 0.0049 | - |
| 0.6099 | 11600 | 0.0396 | - |
| 0.6125 | 11650 | 0.0177 | - |
| 0.6151 | 11700 | 0.0071 | - |
| 0.6177 | 11750 | 0.0071 | - |
| 0.6204 | 11800 | 0.0111 | - |
| 0.6230 | 11850 | 0.0145 | - |
| 0.6256 | 11900 | 0.037 | - |
| 0.6283 | 11950 | 0.0046 | - |
| 0.6309 | 12000 | 0.0258 | - |
| 0.6335 | 12050 | 0.0002 | - |
| 0.6361 | 12100 | 0.002 | - |
| 0.6388 | 12150 | 0.0119 | - |
| 0.6414 | 12200 | 0.0079 | - |
| 0.6440 | 12250 | 0.0239 | - |
| 0.6467 | 12300 | 0.0037 | - |
| 0.6493 | 12350 | 0.0366 | - |
| 0.6519 | 12400 | 0.0201 | - |
| 0.6545 | 12450 | 0.002 | - |
| 0.6572 | 12500 | 0.0652 | - |
| 0.6598 | 12550 | 0.005 | - |
| 0.6624 | 12600 | 0.0034 | - |
| 0.6651 | 12650 | 0.0003 | - |
| 0.6677 | 12700 | 0.0022 | - |
| 0.6703 | 12750 | 0.0001 | - |
| 0.6729 | 12800 | 0.0175 | - |
| 0.6756 | 12850 | 0.0003 | - |
| 0.6782 | 12900 | 0.0085 | - |
| 0.6808 | 12950 | 0.0036 | - |
| 0.6835 | 13000 | 0.0 | - |
| 0.6861 | 13050 | 0.0097 | - |
| 0.6887 | 13100 | 0.006 | - |
| 0.6913 | 13150 | 0.0001 | - |
| 0.6940 | 13200 | 0.0001 | - |
| 0.6966 | 13250 | 0.0379 | - |
| 0.6992 | 13300 | 0.0076 | - |
| 0.7019 | 13350 | 0.0627 | - |
| 0.7045 | 13400 | 0.0605 | - |
| 0.7071 | 13450 | 0.0081 | - |
| 0.7097 | 13500 | 0.0018 | - |
| 0.7124 | 13550 | 0.018 | - |
| 0.7150 | 13600 | 0.0035 | - |
| 0.7176 | 13650 | 0.0001 | - |
| 0.7203 | 13700 | 0.0001 | - |
| 0.7229 | 13750 | 0.0507 | - |
| 0.7255 | 13800 | 0.0082 | - |
| 0.7281 | 13850 | 0.0082 | - |
| 0.7308 | 13900 | 0.0106 | - |
| 0.7334 | 13950 | 0.0067 | - |
| 0.7360 | 14000 | 0.0062 | - |
| 0.7387 | 14050 | 0.0001 | - |
| 0.7413 | 14100 | 0.0246 | - |
| 0.7439 | 14150 | 0.0033 | - |
| 0.7465 | 14200 | 0.0001 | - |
| 0.7492 | 14250 | 0.0432 | - |
| 0.7518 | 14300 | 0.0502 | - |
| 0.7544 | 14350 | 0.0079 | - |
| 0.7571 | 14400 | 0.0291 | - |
| 0.7597 | 14450 | 0.0002 | - |
| 0.7623 | 14500 | 0.0029 | - |
| 0.7649 | 14550 | 0.0321 | - |
| 0.7676 | 14600 | 0.0002 | - |
| 0.7702 | 14650 | 0.0053 | - |
| 0.7728 | 14700 | 0.0094 | - |
| 0.7755 | 14750 | 0.0156 | - |
| 0.7781 | 14800 | 0.071 | - |
| 0.7807 | 14850 | 0.0001 | - |
| 0.7833 | 14900 | 0.0037 | - |
| 0.7860 | 14950 | 0.0544 | - |
| 0.7886 | 15000 | 0.0034 | - |
| 0.7912 | 15050 | 0.0018 | - |
| 0.7939 | 15100 | 0.0014 | - |
| 0.7965 | 15150 | 0.0189 | - |
| 0.7991 | 15200 | 0.0001 | - |
| 0.8017 | 15250 | 0.0057 | - |
| 0.8044 | 15300 | 0.0001 | - |
| 0.8070 | 15350 | 0.0002 | - |
| 0.8096 | 15400 | 0.0003 | - |
| 0.8123 | 15450 | 0.0006 | - |
| 0.8149 | 15500 | 0.1085 | - |
| 0.8175 | 15550 | 0.0003 | - |
| 0.8201 | 15600 | 0.0001 | - |
| 0.8228 | 15650 | 0.0005 | - |
| 0.8254 | 15700 | 0.014 | - |
| 0.8280 | 15750 | 0.0036 | - |
| 0.8307 | 15800 | 0.0001 | - |
| 0.8333 | 15850 | 0.0 | - |
| 0.8359 | 15900 | 0.0 | - |
| 0.8385 | 15950 | 0.0001 | - |
| 0.8412 | 16000 | 0.0001 | - |
| 0.8438 | 16050 | 0.0271 | - |
| 0.8464 | 16100 | 0.0093 | - |
| 0.8491 | 16150 | 0.0444 | - |
| 0.8517 | 16200 | 0.0002 | - |
| 0.8543 | 16250 | 0.0007 | - |
| 0.8569 | 16300 | 0.0002 | - |
| 0.8596 | 16350 | 0.0012 | - |
| 0.8622 | 16400 | 0.0 | - |
| 0.8648 | 16450 | 0.0177 | - |
| 0.8675 | 16500 | 0.0342 | - |
| 0.8701 | 16550 | 0.0288 | - |
| 0.8727 | 16600 | 0.0 | - |
| 0.8753 | 16650 | 0.0024 | - |
| 0.8780 | 16700 | 0.0003 | - |
| 0.8806 | 16750 | 0.0063 | - |
| 0.8832 | 16800 | 0.0442 | - |
| 0.8859 | 16850 | 0.0092 | - |
| 0.8885 | 16900 | 0.0089 | - |
| 0.8911 | 16950 | 0.0027 | - |
| 0.8937 | 17000 | 0.0521 | - |
| 0.8964 | 17050 | 0.0023 | - |
| 0.8990 | 17100 | 0.051 | - |
| 0.9016 | 17150 | 0.0015 | - |
| 0.9043 | 17200 | 0.0003 | - |
| 0.9069 | 17250 | 0.0177 | - |
| 0.9095 | 17300 | 0.0031 | - |
| 0.9121 | 17350 | 0.0205 | - |
| 0.9148 | 17400 | 0.0172 | - |
| 0.9174 | 17450 | 0.0001 | - |
| 0.9200 | 17500 | 0.005 | - |
| 0.9227 | 17550 | 0.0409 | - |
| 0.9253 | 17600 | 0.0001 | - |
| 0.9279 | 17650 | 0.0 | - |
| 0.9306 | 17700 | 0.0002 | - |
| 0.9332 | 17750 | 0.0274 | - |
| 0.9358 | 17800 | 0.0077 | - |
| 0.9384 | 17850 | 0.0078 | - |
| 0.9411 | 17900 | 0.0001 | - |
| 0.9437 | 17950 | 0.0 | - |
| 0.9463 | 18000 | 0.0437 | - |
| 0.9490 | 18050 | 0.0143 | - |
| 0.9516 | 18100 | 0.001 | - |
| 0.9542 | 18150 | 0.0001 | - |
| 0.9568 | 18200 | 0.0428 | - |
| 0.9595 | 18250 | 0.0036 | - |
| 0.9621 | 18300 | 0.0001 | - |
| 0.9647 | 18350 | 0.0001 | - |
| 0.9674 | 18400 | 0.0063 | - |
| 0.9700 | 18450 | 0.0 | - |
| 0.9726 | 18500 | 0.0196 | - |
| 0.9752 | 18550 | 0.0001 | - |
| 0.9779 | 18600 | 0.0001 | - |
| 0.9805 | 18650 | 0.0001 | - |
| 0.9831 | 18700 | 0.0397 | - |
| 0.9858 | 18750 | 0.008 | - |
| 0.9884 | 18800 | 0.015 | - |
| 0.9910 | 18850 | 0.0 | - |
| 0.9936 | 18900 | 0.003 | - |
| 0.9963 | 18950 | 0.025 | - |
| 0.9989 | 19000 | 0.003 | - |
| **1.0** | **19021** | **-** | **0.2343** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.4.0
- Transformers: 4.38.1
- PyTorch: 2.1.0+cu118
- Datasets: 2.17.1
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 7 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.7357 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("luis-cardoso-q/kotodama-multilingual-v3")
# Run inference
preds = model("2023-F48")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 1 | 2.6689 | 16 |
| Label | Training Sample Count |
|:------------------|:----------------------|
| buying | 25 |
| company name | 73 |
| invoice | 128 |
| random characters | 128 |
| refund | 87 |
| rent | 38 |
| salary | 128 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
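As a rough illustration, the values above correspond to SetFit 1.0 `TrainingArguments` approximately as follows; this is a hedged sketch, argument names should be verified against the installed SetFit version, and the loss is left at its default (`CosineSimilarityLoss`).

```python
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),               # (embedding phase, classifier phase)
    num_epochs=(1, 1),
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    warmup_proportion=0.1,
    seed=42,
    load_best_model_at_end=True,
)
```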
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:---------:|:-------------:|:---------------:|
| 0.0001 | 1 | 0.2604 | - |
| 0.0026 | 50 | 0.3244 | - |
| 0.0053 | 100 | 0.2233 | - |
| 0.0079 | 150 | 0.2034 | - |
| 0.0105 | 200 | 0.2998 | - |
| 0.0131 | 250 | 0.2074 | - |
| 0.0158 | 300 | 0.1682 | - |
| 0.0184 | 350 | 0.1815 | - |
| 0.0210 | 400 | 0.155 | - |
| 0.0237 | 450 | 0.16 | - |
| 0.0263 | 500 | 0.117 | - |
| 0.0289 | 550 | 0.1685 | - |
| 0.0315 | 600 | 0.0348 | - |
| 0.0342 | 650 | 0.0912 | - |
| 0.0368 | 700 | 0.0217 | - |
| 0.0394 | 750 | 0.0417 | - |
| 0.0421 | 800 | 0.0592 | - |
| 0.0447 | 850 | 0.047 | - |
| 0.0473 | 900 | 0.0914 | - |
| 0.0499 | 950 | 0.0116 | - |
| 0.0526 | 1000 | 0.022 | - |
| 0.0552 | 1050 | 0.0018 | - |
| 0.0578 | 1100 | 0.0159 | - |
| 0.0605 | 1150 | 0.0097 | - |
| 0.0631 | 1200 | 0.066 | - |
| 0.0657 | 1250 | 0.0027 | - |
| 0.0683 | 1300 | 0.003 | - |
| 0.0710 | 1350 | 0.0146 | - |
| 0.0736 | 1400 | 0.009 | - |
| 0.0762 | 1450 | 0.0016 | - |
| 0.0789 | 1500 | 0.001 | - |
| 0.0815 | 1550 | 0.019 | - |
| 0.0841 | 1600 | 0.0015 | - |
| 0.0867 | 1650 | 0.0003 | - |
| 0.0894 | 1700 | 0.0929 | - |
| 0.0920 | 1750 | 0.013 | - |
| 0.0946 | 1800 | 0.0007 | - |
| 0.0973 | 1850 | 0.0413 | - |
| 0.0999 | 1900 | 0.0922 | - |
| 0.1025 | 1950 | 0.0009 | - |
| 0.1051 | 2000 | 0.001 | - |
| 0.1078 | 2050 | 0.0007 | - |
| 0.1104 | 2100 | 0.0086 | - |
| 0.1130 | 2150 | 0.0017 | - |
| 0.1157 | 2200 | 0.0048 | - |
| 0.1183 | 2250 | 0.0002 | - |
| 0.1209 | 2300 | 0.0518 | - |
| 0.1235 | 2350 | 0.0271 | - |
| 0.1262 | 2400 | 0.0138 | - |
| 0.1288 | 2450 | 0.0136 | - |
| 0.1314 | 2500 | 0.0444 | - |
| 0.1341 | 2550 | 0.0096 | - |
| 0.1367 | 2600 | 0.0064 | - |
| 0.1393 | 2650 | 0.0092 | - |
| 0.1419 | 2700 | 0.0012 | - |
| 0.1446 | 2750 | 0.0044 | - |
| 0.1472 | 2800 | 0.0121 | - |
| 0.1498 | 2850 | 0.0004 | - |
| 0.1525 | 2900 | 0.0002 | - |
| 0.1551 | 2950 | 0.0008 | - |
| 0.1577 | 3000 | 0.0034 | - |
| 0.1603 | 3050 | 0.0002 | - |
| 0.1630 | 3100 | 0.0152 | - |
| 0.1656 | 3150 | 0.0195 | - |
| 0.1682 | 3200 | 0.0005 | - |
| 0.1709 | 3250 | 0.0002 | - |
| 0.1735 | 3300 | 0.0343 | - |
| 0.1761 | 3350 | 0.0095 | - |
| 0.1787 | 3400 | 0.0354 | - |
| 0.1814 | 3450 | 0.0085 | - |
| 0.1840 | 3500 | 0.001 | - |
| 0.1866 | 3550 | 0.0194 | - |
| 0.1893 | 3600 | 0.017 | - |
| 0.1919 | 3650 | 0.0003 | - |
| 0.1945 | 3700 | 0.0024 | - |
| 0.1972 | 3750 | 0.06 | - |
| 0.1998 | 3800 | 0.0006 | - |
| 0.2024 | 3850 | 0.0003 | - |
| 0.2050 | 3900 | 0.0311 | - |
| 0.2077 | 3950 | 0.023 | - |
| 0.2103 | 4000 | 0.0039 | - |
| 0.2129 | 4050 | 0.0085 | - |
| 0.2156 | 4100 | 0.0036 | - |
| 0.2182 | 4150 | 0.0015 | - |
| 0.2208 | 4200 | 0.0584 | - |
| 0.2234 | 4250 | 0.0004 | - |
| 0.2261 | 4300 | 0.0082 | - |
| 0.2287 | 4350 | 0.0001 | - |
| 0.2313 | 4400 | 0.0044 | - |
| 0.2340 | 4450 | 0.0003 | - |
| 0.2366 | 4500 | 0.0495 | - |
| 0.2392 | 4550 | 0.0073 | - |
| 0.2418 | 4600 | 0.0152 | - |
| 0.2445 | 4650 | 0.0033 | - |
| 0.2471 | 4700 | 0.0005 | - |
| 0.2497 | 4750 | 0.0102 | - |
| 0.2524 | 4800 | 0.046 | - |
| 0.2550 | 4850 | 0.0028 | - |
| 0.2576 | 4900 | 0.0014 | - |
| 0.2602 | 4950 | 0.0118 | - |
| 0.2629 | 5000 | 0.0042 | - |
| 0.2655 | 5050 | 0.0005 | - |
| 0.2681 | 5100 | 0.0031 | - |
| 0.2708 | 5150 | 0.0002 | - |
| 0.2734 | 5200 | 0.002 | - |
| 0.2760 | 5250 | 0.0111 | - |
| 0.2786 | 5300 | 0.0286 | - |
| 0.2813 | 5350 | 0.0009 | - |
| 0.2839 | 5400 | 0.0023 | - |
| 0.2865 | 5450 | 0.0079 | - |
| 0.2892 | 5500 | 0.0691 | - |
| 0.2918 | 5550 | 0.0403 | - |
| 0.2944 | 5600 | 0.0002 | - |
| 0.2970 | 5650 | 0.0057 | - |
| 0.2997 | 5700 | 0.0047 | - |
| 0.3023 | 5750 | 0.0322 | - |
| 0.3049 | 5800 | 0.0097 | - |
| 0.3076 | 5850 | 0.0012 | - |
| 0.3102 | 5900 | 0.0047 | - |
| 0.3128 | 5950 | 0.0925 | - |
| 0.3154 | 6000 | 0.0562 | - |
| 0.3181 | 6050 | 0.0058 | - |
| 0.3207 | 6100 | 0.0001 | - |
| 0.3233 | 6150 | 0.0029 | - |
| 0.3260 | 6200 | 0.0001 | - |
| 0.3286 | 6250 | 0.0035 | - |
| 0.3312 | 6300 | 0.0013 | - |
| 0.3338 | 6350 | 0.0152 | - |
| 0.3365 | 6400 | 0.0004 | - |
| 0.3391 | 6450 | 0.0114 | - |
| 0.3417 | 6500 | 0.0906 | - |
| 0.3444 | 6550 | 0.0005 | - |
| 0.3470 | 6600 | 0.0028 | - |
| 0.3496 | 6650 | 0.0395 | - |
| 0.3522 | 6700 | 0.0001 | - |
| 0.3549 | 6750 | 0.0044 | - |
| 0.3575 | 6800 | 0.0121 | - |
| 0.3601 | 6850 | 0.0012 | - |
| 0.3628 | 6900 | 0.0193 | - |
| 0.3654 | 6950 | 0.0014 | - |
| 0.3680 | 7000 | 0.0001 | - |
| 0.3706 | 7050 | 0.0618 | - |
| 0.3733 | 7100 | 0.0066 | - |
| 0.3759 | 7150 | 0.0426 | - |
| 0.3785 | 7200 | 0.0281 | - |
| 0.3812 | 7250 | 0.0254 | - |
| 0.3838 | 7300 | 0.0008 | - |
| 0.3864 | 7350 | 0.0047 | - |
| 0.3890 | 7400 | 0.0088 | - |
| 0.3917 | 7450 | 0.0004 | - |
| 0.3943 | 7500 | 0.0054 | - |
| 0.3969 | 7550 | 0.0371 | - |
| 0.3996 | 7600 | 0.0001 | - |
| 0.4022 | 7650 | 0.0082 | - |
| 0.4048 | 7700 | 0.0162 | - |
| 0.4074 | 7750 | 0.0093 | - |
| 0.4101 | 7800 | 0.0115 | - |
| 0.4127 | 7850 | 0.0114 | - |
| 0.4153 | 7900 | 0.0001 | - |
| 0.4180 | 7950 | 0.0002 | - |
| 0.4206 | 8000 | 0.0098 | - |
| 0.4232 | 8050 | 0.0001 | - |
| 0.4258 | 8100 | 0.0 | - |
| 0.4285 | 8150 | 0.0104 | - |
| 0.4311 | 8200 | 0.0564 | - |
| 0.4337 | 8250 | 0.0002 | - |
| 0.4364 | 8300 | 0.0176 | - |
| 0.4390 | 8350 | 0.0109 | - |
| 0.4416 | 8400 | 0.0001 | - |
| 0.4442 | 8450 | 0.0053 | - |
| 0.4469 | 8500 | 0.0629 | - |
| 0.4495 | 8550 | 0.0324 | - |
| 0.4521 | 8600 | 0.0003 | - |
| 0.4548 | 8650 | 0.0025 | - |
| 0.4574 | 8700 | 0.0032 | - |
| 0.4600 | 8750 | 0.0002 | - |
| 0.4626 | 8800 | 0.0001 | - |
| 0.4653 | 8850 | 0.0475 | - |
| 0.4679 | 8900 | 0.0114 | - |
| 0.4705 | 8950 | 0.0001 | - |
| 0.4732 | 9000 | 0.0028 | - |
| 0.4758 | 9050 | 0.0001 | - |
| 0.4784 | 9100 | 0.0002 | - |
| 0.4810 | 9150 | 0.0001 | - |
| 0.4837 | 9200 | 0.0001 | - |
| 0.4863 | 9250 | 0.0021 | - |
| 0.4889 | 9300 | 0.0001 | - |
| 0.4916 | 9350 | 0.0014 | - |
| 0.4942 | 9400 | 0.0176 | - |
| 0.4968 | 9450 | 0.0005 | - |
| 0.4994 | 9500 | 0.0001 | - |
| 0.5021 | 9550 | 0.0314 | - |
| 0.5047 | 9600 | 0.0613 | - |
| 0.5073 | 9650 | 0.018 | - |
| 0.5100 | 9700 | 0.0 | - |
| 0.5126 | 9750 | 0.0023 | - |
| 0.5152 | 9800 | 0.0013 | - |
| 0.5178 | 9850 | 0.0001 | - |
| 0.5205 | 9900 | 0.0003 | - |
| 0.5231 | 9950 | 0.001 | - |
| 0.5257 | 10000 | 0.0001 | - |
| 0.5284 | 10050 | 0.0193 | - |
| 0.5310 | 10100 | 0.0051 | - |
| 0.5336 | 10150 | 0.0001 | - |
| 0.5362 | 10200 | 0.0005 | - |
| 0.5389 | 10250 | 0.0 | - |
| 0.5415 | 10300 | 0.0001 | - |
| 0.5441 | 10350 | 0.0001 | - |
| 0.5468 | 10400 | 0.0037 | - |
| 0.5494 | 10450 | 0.0309 | - |
| 0.5520 | 10500 | 0.0286 | - |
| 0.5547 | 10550 | 0.0 | - |
| 0.5573 | 10600 | 0.0155 | - |
| 0.5599 | 10650 | 0.0001 | - |
| 0.5625 | 10700 | 0.0077 | - |
| 0.5652 | 10750 | 0.0153 | - |
| 0.5678 | 10800 | 0.0042 | - |
| 0.5704 | 10850 | 0.0103 | - |
| 0.5731 | 10900 | 0.0097 | - |
| 0.5757 | 10950 | 0.0109 | - |
| 0.5783 | 11000 | 0.0001 | - |
| 0.5809 | 11050 | 0.0103 | - |
| 0.5836 | 11100 | 0.0024 | - |
| 0.5862 | 11150 | 0.0001 | - |
| 0.5888 | 11200 | 0.0487 | - |
| 0.5915 | 11250 | 0.0009 | - |
| 0.5941 | 11300 | 0.0001 | - |
| 0.5967 | 11350 | 0.0002 | - |
| 0.5993 | 11400 | 0.0035 | - |
| 0.6020 | 11450 | 0.0005 | - |
| 0.6046 | 11500 | 0.0001 | - |
| 0.6072 | 11550 | 0.0049 | - |
| 0.6099 | 11600 | 0.0396 | - |
| 0.6125 | 11650 | 0.0177 | - |
| 0.6151 | 11700 | 0.0071 | - |
| 0.6177 | 11750 | 0.0071 | - |
| 0.6204 | 11800 | 0.0111 | - |
| 0.6230 | 11850 | 0.0145 | - |
| 0.6256 | 11900 | 0.037 | - |
| 0.6283 | 11950 | 0.0046 | - |
| 0.6309 | 12000 | 0.0258 | - |
| 0.6335 | 12050 | 0.0002 | - |
| 0.6361 | 12100 | 0.002 | - |
| 0.6388 | 12150 | 0.0119 | - |
| 0.6414 | 12200 | 0.0079 | - |
| 0.6440 | 12250 | 0.0239 | - |
| 0.6467 | 12300 | 0.0037 | - |
| 0.6493 | 12350 | 0.0366 | - |
| 0.6519 | 12400 | 0.0201 | - |
| 0.6545 | 12450 | 0.002 | - |
| 0.6572 | 12500 | 0.0652 | - |
| 0.6598 | 12550 | 0.005 | - |
| 0.6624 | 12600 | 0.0034 | - |
| 0.6651 | 12650 | 0.0003 | - |
| 0.6677 | 12700 | 0.0022 | - |
| 0.6703 | 12750 | 0.0001 | - |
| 0.6729 | 12800 | 0.0175 | - |
| 0.6756 | 12850 | 0.0003 | - |
| 0.6782 | 12900 | 0.0085 | - |
| 0.6808 | 12950 | 0.0036 | - |
| 0.6835 | 13000 | 0.0 | - |
| 0.6861 | 13050 | 0.0097 | - |
| 0.6887 | 13100 | 0.006 | - |
| 0.6913 | 13150 | 0.0001 | - |
| 0.6940 | 13200 | 0.0001 | - |
| 0.6966 | 13250 | 0.0379 | - |
| 0.6992 | 13300 | 0.0076 | - |
| 0.7019 | 13350 | 0.0627 | - |
| 0.7045 | 13400 | 0.0605 | - |
| 0.7071 | 13450 | 0.0081 | - |
| 0.7097 | 13500 | 0.0018 | - |
| 0.7124 | 13550 | 0.018 | - |
| 0.7150 | 13600 | 0.0035 | - |
| 0.7176 | 13650 | 0.0001 | - |
| 0.7203 | 13700 | 0.0001 | - |
| 0.7229 | 13750 | 0.0507 | - |
| 0.7255 | 13800 | 0.0082 | - |
| 0.7281 | 13850 | 0.0082 | - |
| 0.7308 | 13900 | 0.0106 | - |
| 0.7334 | 13950 | 0.0067 | - |
| 0.7360 | 14000 | 0.0062 | - |
| 0.7387 | 14050 | 0.0001 | - |
| 0.7413 | 14100 | 0.0246 | - |
| 0.7439 | 14150 | 0.0033 | - |
| 0.7465 | 14200 | 0.0001 | - |
| 0.7492 | 14250 | 0.0432 | - |
| 0.7518 | 14300 | 0.0502 | - |
| 0.7544 | 14350 | 0.0079 | - |
| 0.7571 | 14400 | 0.0291 | - |
| 0.7597 | 14450 | 0.0002 | - |
| 0.7623 | 14500 | 0.0029 | - |
| 0.7649 | 14550 | 0.0321 | - |
| 0.7676 | 14600 | 0.0002 | - |
| 0.7702 | 14650 | 0.0053 | - |
| 0.7728 | 14700 | 0.0094 | - |
| 0.7755 | 14750 | 0.0156 | - |
| 0.7781 | 14800 | 0.071 | - |
| 0.7807 | 14850 | 0.0001 | - |
| 0.7833 | 14900 | 0.0037 | - |
| 0.7860 | 14950 | 0.0544 | - |
| 0.7886 | 15000 | 0.0034 | - |
| 0.7912 | 15050 | 0.0018 | - |
| 0.7939 | 15100 | 0.0014 | - |
| 0.7965 | 15150 | 0.0189 | - |
| 0.7991 | 15200 | 0.0001 | - |
| 0.8017 | 15250 | 0.0057 | - |
| 0.8044 | 15300 | 0.0001 | - |
| 0.8070 | 15350 | 0.0002 | - |
| 0.8096 | 15400 | 0.0003 | - |
| 0.8123 | 15450 | 0.0006 | - |
| 0.8149 | 15500 | 0.1085 | - |
| 0.8175 | 15550 | 0.0003 | - |
| 0.8201 | 15600 | 0.0001 | - |
| 0.8228 | 15650 | 0.0005 | - |
| 0.8254 | 15700 | 0.014 | - |
| 0.8280 | 15750 | 0.0036 | - |
| 0.8307 | 15800 | 0.0001 | - |
| 0.8333 | 15850 | 0.0 | - |
| 0.8359 | 15900 | 0.0 | - |
| 0.8385 | 15950 | 0.0001 | - |
| 0.8412 | 16000 | 0.0001 | - |
| 0.8438 | 16050 | 0.0271 | - |
| 0.8464 | 16100 | 0.0093 | - |
| 0.8491 | 16150 | 0.0444 | - |
| 0.8517 | 16200 | 0.0002 | - |
| 0.8543 | 16250 | 0.0007 | - |
| 0.8569 | 16300 | 0.0002 | - |
| 0.8596 | 16350 | 0.0012 | - |
| 0.8622 | 16400 | 0.0 | - |
| 0.8648 | 16450 | 0.0177 | - |
| 0.8675 | 16500 | 0.0342 | - |
| 0.8701 | 16550 | 0.0288 | - |
| 0.8727 | 16600 | 0.0 | - |
| 0.8753 | 16650 | 0.0024 | - |
| 0.8780 | 16700 | 0.0003 | - |
| 0.8806 | 16750 | 0.0063 | - |
| 0.8832 | 16800 | 0.0442 | - |
| 0.8859 | 16850 | 0.0092 | - |
| 0.8885 | 16900 | 0.0089 | - |
| 0.8911 | 16950 | 0.0027 | - |
| 0.8937 | 17000 | 0.0521 | - |
| 0.8964 | 17050 | 0.0023 | - |
| 0.8990 | 17100 | 0.051 | - |
| 0.9016 | 17150 | 0.0015 | - |
| 0.9043 | 17200 | 0.0003 | - |
| 0.9069 | 17250 | 0.0177 | - |
| 0.9095 | 17300 | 0.0031 | - |
| 0.9121 | 17350 | 0.0205 | - |
| 0.9148 | 17400 | 0.0172 | - |
| 0.9174 | 17450 | 0.0001 | - |
| 0.9200 | 17500 | 0.005 | - |
| 0.9227 | 17550 | 0.0409 | - |
| 0.9253 | 17600 | 0.0001 | - |
| 0.9279 | 17650 | 0.0 | - |
| 0.9306 | 17700 | 0.0002 | - |
| 0.9332 | 17750 | 0.0274 | - |
| 0.9358 | 17800 | 0.0077 | - |
| 0.9384 | 17850 | 0.0078 | - |
| 0.9411 | 17900 | 0.0001 | - |
| 0.9437 | 17950 | 0.0 | - |
| 0.9463 | 18000 | 0.0437 | - |
| 0.9490 | 18050 | 0.0143 | - |
| 0.9516 | 18100 | 0.001 | - |
| 0.9542 | 18150 | 0.0001 | - |
| 0.9568 | 18200 | 0.0428 | - |
| 0.9595 | 18250 | 0.0036 | - |
| 0.9621 | 18300 | 0.0001 | - |
| 0.9647 | 18350 | 0.0001 | - |
| 0.9674 | 18400 | 0.0063 | - |
| 0.9700 | 18450 | 0.0 | - |
| 0.9726 | 18500 | 0.0196 | - |
| 0.9752 | 18550 | 0.0001 | - |
| 0.9779 | 18600 | 0.0001 | - |
| 0.9805 | 18650 | 0.0001 | - |
| 0.9831 | 18700 | 0.0397 | - |
| 0.9858 | 18750 | 0.008 | - |
| 0.9884 | 18800 | 0.015 | - |
| 0.9910 | 18850 | 0.0 | - |
| 0.9936 | 18900 | 0.003 | - |
| 0.9963 | 18950 | 0.025 | - |
| 0.9989 | 19000 | 0.003 | - |
| **1.0** | **19021** | **-** | **0.2343** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.4.0
- Transformers: 4.38.1
- PyTorch: 2.1.0+cu118
- Datasets: 2.17.1
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/paraphrase-multilingual-mpnet-base-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "loan repayment "}, {"text": "2023-F48"}, {"text": "acompte "}, {"text": "2023-12-1165548"}, {"text": "Facture 20230040"}], "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.73568281938326, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,252 |
myeolinmalchi/Gugugo-koen-1.3B-V1.0-GGML
|
myeolinmalchi
|
translation
|
[
"translation",
"en",
"ko",
"license:apache-2.0",
"region:us"
] | 2023-08-15T12:27:42Z |
2023-08-15T12:37:42+00:00
| 0 | 1 |
---
language:
- en
- ko
license: apache-2.0
pipeline_tag: translation
---
original model: [Gugugo-koen-1.3B-V1.0](https://huggingface.co/squarelike/Gugugo-koen-1.3B-V1.0)
Prompt Template:
```
### 한국어: {sentence}</끝>
### 영어:
```
```
### 영어: {sentence}</끝>
### 한국어:
```
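The card does not include loading code. Below is a minimal, hedged sketch that applies the prompt template with the original full-precision checkpoint via 🤗 Transformers; the GGML files in this repository would instead require a GGML-compatible runtime, and stopping generation at `</끝>` is left to post-processing.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Original (non-GGML) checkpoint; the GGML binaries here need a GGML runtime instead
model_id = "squarelike/Gugugo-koen-1.3B-V1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# English -> Korean, following the prompt template above
prompt = "### 영어: The weather is nice today.</끝>\n### 한국어:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```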
| null |
Non_BioNLP
|
original model: [Gugugo-koen-1.3B-V1.0](https://huggingface.co/squarelike/Gugugo-koen-1.3B-V1.0)
Prompt Template:
```
### 한국어: {sentence}</끝>
### 영어:
```
```
### 영어: {sentence}</끝>
### 한국어:
```
|
{"language": ["en", "ko"], "license": "apache-2.0", "pipeline_tag": "translation"}
|
task
|
[
"TRANSLATION"
] | 42,253 |
TransferGraph/cardiffnlp_twitter-roberta-base-2021-124m-finetuned-lora-tweet_eval_emotion
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:cardiffnlp/twitter-roberta-base-2021-124m",
"base_model:adapter:cardiffnlp/twitter-roberta-base-2021-124m",
"license:mit",
"model-index",
"region:us"
] | 2024-02-29T12:50:46Z |
2024-02-29T12:50:50+00:00
| 0 | 0 |
---
base_model: cardiffnlp/twitter-roberta-base-2021-124m
datasets:
- tweet_eval
library_name: peft
license: mit
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: cardiffnlp_twitter-roberta-base-2021-124m-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7941176470588235
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cardiffnlp_twitter-roberta-base-2021-124m-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2021-124m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7941
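The card does not show how to run the adapter. A minimal, hedged inference sketch with PEFT might look like the following; the four-label setup and the label ordering of tweet_eval's `emotion` config are assumptions taken from the dataset, not from this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "cardiffnlp/twitter-roberta-base-2021-124m"
adapter_id = "TransferGraph/cardiffnlp_twitter-roberta-base-2021-124m-finetuned-lora-tweet_eval_emotion"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=4)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
model.eval()

labels = ["anger", "joy", "optimism", "sadness"]  # tweet_eval "emotion" labels (assumed ordering)
inputs = tokenizer("I can't wait for the weekend!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[logits.argmax(dim=-1).item()])
```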
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2594 | None | 0 |
| 0.7273 | 0.9137 | 0 |
| 0.7861 | 0.6091 | 1 |
| 0.7914 | 0.5396 | 2 |
| 0.7941 | 0.5120 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cardiffnlp_twitter-roberta-base-2021-124m-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2021-124m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7941
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2594 | None | 0 |
| 0.7273 | 0.9137 | 0 |
| 0.7861 | 0.6091 | 1 |
| 0.7914 | 0.5396 | 2 |
| 0.7941 | 0.5120 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"base_model": "cardiffnlp/twitter-roberta-base-2021-124m", "datasets": ["tweet_eval"], "library_name": "peft", "license": "mit", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "cardiffnlp_twitter-roberta-base-2021-124m-finetuned-lora-tweet_eval_emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "emotion", "split": "validation", "args": "emotion"}, "metrics": [{"type": "accuracy", "value": 0.7941176470588235, "name": "accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,254 |
LequeuISIR/ModernBERT-base-DPR-8e-05
|
LequeuISIR
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:478146",
"loss:CoSENTLoss",
"arxiv:1908.10084",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-01-28T18:14:53Z |
2025-01-28T18:15:15+00:00
| 5 | 0 |
---
base_model: answerdotai/ModernBERT-base
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:478146
- loss:CoSENTLoss
widget:
- source_sentence: However, its underutilization is mainly due to the absence of a
concrete and coherent dissemination strategy.
sentences:
- At the same time, they need to understand that living in Europe brings great responsibilities
in addition to great benefits.
- 'The mainstay of any intelligent and patriotic mineral policy can be summed up
in the following postulate: "since minerals are exhaustible, they should only
be exploited with the maximum return for the economy of the country where they
are mined".'
- We must move quickly to a shared sustainable energy supply, sustainable transportation
and clean air.
- source_sentence: Their track record shows they do not support Australia's traditional
industries because they are constantly pandering to the Greens.
sentences:
- An economic dynamic based on the sustainable development of national potential,
equitable access to the means of production, social justice, environmental conservation,
the incorporation of added value, the promotion of competitiveness and self-management,
- the cry "El campo no aguanta más" (The countryside can't take it anymore), of
the peasant movement and its proclamation of "Salvemos al Campo para salvar a
México" (Let's save the countryside to save Mexico);
- On the other hand, increasing defence capacity is directly related to the involvement
of all citizens in appropriate programmes, which, together with the acquisition
of skills, experience and organisation, also contribute to forging a spirit of
militancy and collectivity.
- source_sentence: We will prepare the proposals of the United Nations Declaration
on the Rights of the Child in line with the commitments made.
sentences:
- For the presentation of Czech culture, we will also use the upcoming major anniversaries
(100 years of the founding of Czechoslovakia, the 30th anniversary of the canonization
of Agnes of Bohemia, 600 years since the birth of George of Poděbrady, etc.).
- Separate prison units for young people should be established, and special rehabilitation
measures should be introduced in these units.
- Austrian citizenship is a valuable asset and should not become accessible to those
who do not abide by the laws of our state.
- source_sentence: Third, CD&V wants to strengthen the social sustainability of our
agriculture and horticulture sector.
sentences:
- We will take a farm-level approach where possible so that low-emissions farmers
are rewarded with a lower cost through the ETS, rather than the current approach
that assumes each cow, for instance, has the same emissions on every farm.
- In addition, 20 billion euros in tax revenues are fraudulently evaded every year
(the equivalent of the healthcare budget).
- 87 percent of arrested undocumented migrants are released sooner or later, but
without papers, in a lawless situation.
- source_sentence: This incites social hatred, threatens economic and social stability,
and undermines trust in the authorities.
sentences:
- ' The conditions for a healthy entrepreneurship, where the most innovative and
creative win and where the source of enrichment cannot be property speculation
or guilds and networks. '
- According to statistics from the Attorney General's Office, since February 2005,
when the implementation of the PSD was announced, the rate of violent deaths per
100,000 inhabitants has dropped from 26.41 in December 2005 to 18.43 in December
2007.
- As a result, the profits of the oligarchs are more than 400 times what our entire
country gets from the exploitation of natural resources.
---
# SentenceTransformer based on answerdotai/ModernBERT-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) <!-- at revision 8949b909ec900327062f0ebf497f51aef5e6f0c8 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("LequeuISIR/ModernBERT-base-DPR-8e-05")
# Run inference
sentences = [
'This incites social hatred, threatens economic and social stability, and undermines trust in the authorities.',
'\xa0The conditions for a healthy entrepreneurship, where the most innovative and creative win and where the source of enrichment cannot be property speculation or guilds and networks. ',
'As a result, the profits of the oligarchs are more than 400 times what our entire country gets from the exploitation of natural resources.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 478,146 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 17 tokens</li><li>mean: 33.73 tokens</li><li>max: 107 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 33.84 tokens</li><li>max: 101 tokens</li></ul> | <ul><li>0: ~57.50%</li><li>1: ~4.10%</li><li>2: ~38.40%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>There have also been other important structural changes in the countryside, which have come together to form this new, as yet unknown, country.</code> | <code>Meanwhile, investment, which is the way to increase production, employment capacity and competitiveness of the economy, fell from 20% of output in 1974 to only 11.8% on average between 1984 and 1988.</code> | <code>0</code> |
| <code>Introduce new visa categories so we can be responsive to humanitarian needs and incentivise greater investment in our domestic infrastructure and regional economies</code> | <code>The purpose of the project is to design and implement public policies aimed at achieving greater and faster inclusion of immigrants.</code> | <code>2</code> |
| <code>and economic crimes that seriously and generally affect the fundamental rights of individuals and the international community as a whole.</code> | <code>For the first time in the history, not only of Ecuador, but of the entire world, a government promoted a public audit process of the foreign debt and declared some of its tranches illegitimate and immoral.</code> | <code>0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
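For context, this loss is constructed from the Sentence Transformers library roughly as follows; a hedged sketch, with the scale matching the value listed above and pairwise cosine similarity as the similarity function.

```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("answerdotai/ModernBERT-base")
# scale=20.0 as listed above; pairwise cosine similarity is the default similarity_fct
loss = losses.CoSENTLoss(model=model, scale=20.0)
```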
### Evaluation Dataset
#### json
* Dataset: json
* Size: 478,146 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 17 tokens</li><li>mean: 33.62 tokens</li><li>max: 103 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 34.48 tokens</li><li>max: 111 tokens</li></ul> | <ul><li>0: ~57.30%</li><li>1: ~2.90%</li><li>2: ~39.80%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>The anchoring of the Slovak Republic in the European Union allows citizens to feel: secure politically, secure economically, secure socially.</code> | <code>Radikale Venstre wants Denmark to participate fully and firmly in EU cooperation on immigration, asylum and cross-border crime.</code> | <code>2</code> |
| <code>Portugal's participation in the Community's negotiation of the next financial perspective should also be geared in the same direction.</code> | <code>Given the dynamic international framework, safeguarding the national interest requires adjustments to each of these vectors.</code> | <code>2</code> |
| <code>On asylum, the Green Party will: Dismantle the direct provision system and replace it with an efficient and humane system for determining the status of asylum seekers</code> | <code>The crisis in the coal sector subsequently forced these immigrant workers to move into other economic sectors such as metallurgy, chemicals, construction and transport.</code> | <code>2</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 8e-05
- `num_train_epochs`: 5
- `warmup_ratio`: 0.05
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 8e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
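A hedged sketch of how the non-default values above map onto the Sentence Transformers v3 trainer; the dataset path is a placeholder and the data is assumed to carry the `sentence1`, `sentence2`, and `label` columns described above.

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CoSENTLoss
from sentence_transformers.training_args import BatchSamplers

train_dataset = load_dataset("json", data_files="train.json")["train"]  # placeholder path

model = SentenceTransformer("answerdotai/ModernBERT-base")
loss = CoSENTLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-base-dpr",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=8e-5,
    num_train_epochs=5,
    warmup_ratio=0.05,
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```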
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:-----:|:-------------:|
| 0.0837 | 500 | 6.425 |
| 0.1673 | 1000 | 6.0308 |
| 0.2510 | 1500 | 5.9522 |
| 0.3346 | 2000 | 5.7818 |
| 0.4183 | 2500 | 5.7122 |
| 0.5019 | 3000 | 5.6378 |
| 0.5856 | 3500 | 5.5503 |
| 0.6692 | 4000 | 5.4429 |
| 0.7529 | 4500 | 5.4246 |
| 0.8365 | 5000 | 5.3536 |
| 0.9202 | 5500 | 5.4072 |
| 1.0038 | 6000 | 5.3033 |
| 1.0875 | 6500 | 4.7611 |
| 1.1712 | 7000 | 4.7535 |
| 1.2548 | 7500 | 4.7503 |
| 1.3385 | 8000 | 4.7453 |
| 1.4221 | 8500 | 4.7413 |
| 1.5058 | 9000 | 4.6753 |
| 1.5894 | 9500 | 4.67 |
| 1.6731 | 10000 | 4.7352 |
| 1.7567 | 10500 | 4.7164 |
| 1.8404 | 11000 | 4.6784 |
| 1.9240 | 11500 | 4.651 |
| 2.0077 | 12000 | 4.5708 |
| 2.0914 | 12500 | 3.6274 |
| 2.1750 | 13000 | 3.5683 |
| 2.2587 | 13500 | 3.7028 |
| 2.3423 | 14000 | 3.5859 |
| 2.4260 | 14500 | 3.6872 |
| 2.5096 | 15000 | 3.5148 |
| 2.5933 | 15500 | 3.7241 |
| 2.6769 | 16000 | 3.5983 |
| 2.7606 | 16500 | 3.6269 |
| 2.8442 | 17000 | 3.6078 |
| 2.9279 | 17500 | 3.6292 |
| 3.0115 | 18000 | 3.5151 |
| 3.0952 | 18500 | 2.5933 |
| 3.1789 | 19000 | 2.599 |
| 3.2625 | 19500 | 2.5598 |
| 3.3462 | 20000 | 2.5577 |
| 3.4298 | 20500 | 2.5827 |
| 3.5135 | 21000 | 2.5598 |
| 3.5971 | 21500 | 2.4173 |
| 3.6808 | 22000 | 2.5884 |
| 3.7644 | 22500 | 2.4313 |
| 3.8481 | 23000 | 2.5669 |
| 3.9317 | 23500 | 2.5162 |
| 4.0154 | 24000 | 2.2531 |
| 4.0990 | 24500 | 1.3758 |
| 4.1827 | 25000 | 1.5491 |
| 4.2664 | 25500 | 1.4933 |
| 4.3500 | 26000 | 1.5139 |
| 4.4337 | 26500 | 1.4607 |
| 4.5173 | 27000 | 1.6117 |
| 4.6010 | 27500 | 1.5395 |
| 4.6846 | 28000 | 1.493 |
| 4.7683 | 28500 | 1.3984 |
| 4.8519 | 29000 | 1.4183 |
| 4.9356 | 29500 | 1.3517 |
### Framework Versions
- Python: 3.9.21
- Sentence Transformers: 3.4.0
- Transformers: 4.48.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on answerdotai/ModernBERT-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) <!-- at revision 8949b909ec900327062f0ebf497f51aef5e6f0c8 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("LequeuISIR/ModernBERT-base-DPR-8e-05")
# Run inference
sentences = [
'This incites social hatred, threatens economic and social stability, and undermines trust in the authorities.',
'\xa0The conditions for a healthy entrepreneurship, where the most innovative and creative win and where the source of enrichment cannot be property speculation or guilds and networks. ',
'As a result, the profits of the oligarchs are more than 400 times what our entire country gets from the exploitation of natural resources.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 478,146 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 17 tokens</li><li>mean: 33.73 tokens</li><li>max: 107 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 33.84 tokens</li><li>max: 101 tokens</li></ul> | <ul><li>0: ~57.50%</li><li>1: ~4.10%</li><li>2: ~38.40%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>There have also been other important structural changes in the countryside, which have come together to form this new, as yet unknown, country.</code> | <code>Meanwhile, investment, which is the way to increase production, employment capacity and competitiveness of the economy, fell from 20% of output in 1974 to only 11.8% on average between 1984 and 1988.</code> | <code>0</code> |
| <code>Introduce new visa categories so we can be responsive to humanitarian needs and incentivise greater investment in our domestic infrastructure and regional economies</code> | <code>The purpose of the project is to design and implement public policies aimed at achieving greater and faster inclusion of immigrants.</code> | <code>2</code> |
| <code>and economic crimes that seriously and generally affect the fundamental rights of individuals and the international community as a whole.</code> | <code>For the first time in the history, not only of Ecuador, but of the entire world, a government promoted a public audit process of the foreign debt and declared some of its tranches illegitimate and immoral.</code> | <code>0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
### Evaluation Dataset
#### json
* Dataset: json
* Size: 478,146 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 17 tokens</li><li>mean: 33.62 tokens</li><li>max: 103 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 34.48 tokens</li><li>max: 111 tokens</li></ul> | <ul><li>0: ~57.30%</li><li>1: ~2.90%</li><li>2: ~39.80%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>The anchoring of the Slovak Republic in the European Union allows citizens to feel: secure politically, secure economically, secure socially.</code> | <code>Radikale Venstre wants Denmark to participate fully and firmly in EU cooperation on immigration, asylum and cross-border crime.</code> | <code>2</code> |
| <code>Portugal's participation in the Community's negotiation of the next financial perspective should also be geared in the same direction.</code> | <code>Given the dynamic international framework, safeguarding the national interest requires adjustments to each of these vectors.</code> | <code>2</code> |
| <code>On asylum, the Green Party will: Dismantle the direct provision system and replace it with an efficient and humane system for determining the status of asylum seekers</code> | <code>The crisis in the coal sector subsequently forced these immigrant workers to move into other economic sectors such as metallurgy, chemicals, construction and transport.</code> | <code>2</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 8e-05
- `num_train_epochs`: 5
- `warmup_ratio`: 0.05
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 8e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:-----:|:-------------:|
| 0.0837 | 500 | 6.425 |
| 0.1673 | 1000 | 6.0308 |
| 0.2510 | 1500 | 5.9522 |
| 0.3346 | 2000 | 5.7818 |
| 0.4183 | 2500 | 5.7122 |
| 0.5019 | 3000 | 5.6378 |
| 0.5856 | 3500 | 5.5503 |
| 0.6692 | 4000 | 5.4429 |
| 0.7529 | 4500 | 5.4246 |
| 0.8365 | 5000 | 5.3536 |
| 0.9202 | 5500 | 5.4072 |
| 1.0038 | 6000 | 5.3033 |
| 1.0875 | 6500 | 4.7611 |
| 1.1712 | 7000 | 4.7535 |
| 1.2548 | 7500 | 4.7503 |
| 1.3385 | 8000 | 4.7453 |
| 1.4221 | 8500 | 4.7413 |
| 1.5058 | 9000 | 4.6753 |
| 1.5894 | 9500 | 4.67 |
| 1.6731 | 10000 | 4.7352 |
| 1.7567 | 10500 | 4.7164 |
| 1.8404 | 11000 | 4.6784 |
| 1.9240 | 11500 | 4.651 |
| 2.0077 | 12000 | 4.5708 |
| 2.0914 | 12500 | 3.6274 |
| 2.1750 | 13000 | 3.5683 |
| 2.2587 | 13500 | 3.7028 |
| 2.3423 | 14000 | 3.5859 |
| 2.4260 | 14500 | 3.6872 |
| 2.5096 | 15000 | 3.5148 |
| 2.5933 | 15500 | 3.7241 |
| 2.6769 | 16000 | 3.5983 |
| 2.7606 | 16500 | 3.6269 |
| 2.8442 | 17000 | 3.6078 |
| 2.9279 | 17500 | 3.6292 |
| 3.0115 | 18000 | 3.5151 |
| 3.0952 | 18500 | 2.5933 |
| 3.1789 | 19000 | 2.599 |
| 3.2625 | 19500 | 2.5598 |
| 3.3462 | 20000 | 2.5577 |
| 3.4298 | 20500 | 2.5827 |
| 3.5135 | 21000 | 2.5598 |
| 3.5971 | 21500 | 2.4173 |
| 3.6808 | 22000 | 2.5884 |
| 3.7644 | 22500 | 2.4313 |
| 3.8481 | 23000 | 2.5669 |
| 3.9317 | 23500 | 2.5162 |
| 4.0154 | 24000 | 2.2531 |
| 4.0990 | 24500 | 1.3758 |
| 4.1827 | 25000 | 1.5491 |
| 4.2664 | 25500 | 1.4933 |
| 4.3500 | 26000 | 1.5139 |
| 4.4337 | 26500 | 1.4607 |
| 4.5173 | 27000 | 1.6117 |
| 4.6010 | 27500 | 1.5395 |
| 4.6846 | 28000 | 1.493 |
| 4.7683 | 28500 | 1.3984 |
| 4.8519 | 29000 | 1.4183 |
| 4.9356 | 29500 | 1.3517 |
### Framework Versions
- Python: 3.9.21
- Sentence Transformers: 3.4.0
- Transformers: 4.48.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "answerdotai/ModernBERT-base", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:478146", "loss:CoSENTLoss"], "widget": [{"source_sentence": "However, its underutilization is mainly due to the absence of a concrete and coherent dissemination strategy.", "sentences": ["At the same time, they need to understand that living in Europe brings great responsibilities in addition to great benefits.", "The mainstay of any intelligent and patriotic mineral policy can be summed up in the following postulate: \"since minerals are exhaustible, they should only be exploited with the maximum return for the economy of the country where they are mined\".", "We must move quickly to a shared sustainable energy supply, sustainable transportation and clean air."]}, {"source_sentence": "Their track record shows they do not support Australia<92>s traditional industries because they are constantly pandering to the Greens.", "sentences": ["An economic dynamic based on the sustainable development of national potential, equitable access to the means of production, social justice, environmental conservation, the incorporation of added value, the promotion of competitiveness and self-management,", "the cry \"El campo no aguanta más\" (The countryside can't take it anymore), of the peasant movement and its proclamation of \"Salvemos al Campo para salvar a México\" (Let's save the countryside to save Mexico);", "On the other hand, increasing defence capacity is directly related to the involvement of all citizens in appropriate programmes, which, together with the acquisition of skills, experience and organisation, also contribute to forging a spirit of militancy and collectivity."]}, {"source_sentence": "We will prepare the proposals of the United Nations Declaration on the Rights of the Child in line with the commitments made.", "sentences": ["For the presentation of Czech culture, we will also use the upcoming major anniversaries (100 years of the founding of Czechoslovakia, the 30th anniversary of the canonization of Agnes of Bohemia, 600 years since the birth of George of Poděbrady, etc.).", "Separate prison units for young people should be established, and special rehabilitation measures should be introduced in these units.", "Austrian citizenship is a valuable asset and should not become accessible to those who do not abide by the laws of our state."]}, {"source_sentence": "Third, CD&V wants to strengthen the social sustainability of our agriculture and horticulture sector.", "sentences": ["We will take a farm-level approach where possible so that low-emissions farmers are rewarded with a lower cost through the ETS, rather than the current approach that assumes each cow, for instance, has the same emissions on every farm.", "In addition, 20 billion euros in tax revenues are fraudulently evaded every year (the equivalent of the healthcare budget).", "87 percent of arrested undocumented migrants are released sooner or later, but without papers, in a lawless situation."]}, {"source_sentence": "This incites social hatred, threatens economic and social stability, and undermines trust in the authorities.", "sentences": [" The conditions for a healthy entrepreneurship, where the most innovative and creative win and where the source of enrichment cannot be property speculation or guilds and networks. 
", "According to statistics from the Attorney General's Office, since February 2005, when the implementation of the PSD was announced, the rate of violent deaths per 100,000 inhabitants has dropped from 26.41 in December 2005 to 18.43 in December 2007.", "As a result, the profits of the oligarchs are more than 400 times what our entire country gets from the exploitation of natural resources."]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,255 |
ocaklisemih/sekerlipencere-zynpdata-mistral-7b-summarization
|
ocaklisemih
|
summarization
|
[
"transformers",
"safetensors",
"summarization",
"turkish",
"mistral",
"causal-lm",
"tr",
"dataset:sekerlipencere/zynpdata-zynp_ai-teknofest",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2024-08-17T19:17:02Z |
2024-08-17T20:27:53+00:00
| 21 | 1 |
---
datasets:
- sekerlipencere/zynpdata-zynp_ai-teknofest
language:
- tr
library_name: transformers
license: mit
pipeline_tag: summarization
tags:
- summarization
- turkish
- mistral
- causal-lm
---
# Zynp AI Teknofest Answer Summarization Model
This model was fine-tuned from **Mistral-7B** for summarization tasks in Turkish. It condenses the long answers given to a question into shorter, easier-to-understand information. It is optimized specifically for processing Turkish text.
## Dataset
The model was trained on zynpdata-zynp_ai-teknofest, Turkey's largest open-source Turkish dataset. For more information about the dataset and details on how to use it, see [this link](https://sekerlipencere.com.tr/posts/zynpdata-turkiyenin-en-buyuk-acik-kaynakli-turkce-veri-seti/).
## Usage
The model is straightforward to use. You can load and test it with the following Python code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the model and tokenizer
model_name = "ocaklisemih/sekerlipencere-zynpdata-mistral-7b-summarization"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
# Example input text
input_text = """<s>[INST]Soru: CS:GO FPS nasıl arttırılır?
Hocam çoklu CPU kullanımını ayarlardan kapattıysanız aktif edince 4 5 FPS artar.CS:GO görüntü ayarlarında Uber gölgelendirici kullan komutunu hayır yapmanız öneririm dikey eşitleme FPS'ini sabitler bundan dolayı yüksek FPS değerleri almana mani olur.[/INST]
Özet:
"""
# Tokenize the input text
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
input_ids = inputs["input_ids"]
# Generate the summary with the model
output = model.generate(input_ids, max_new_tokens=150)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```
## Training Details
This model was trained with the following settings (an illustrative fine-tuning sketch based on them follows the Model Features list below):
* Model: Mistral-7B
* Dataset: sekerlipencere-zynpdata-zynp_ai-teknofest
* Training Duration: 3 epochs
* Hyperparameters:
* Learning Rate: 2e-4
* Total Steps: 10,000
* Batch Size: 4
* Gradient Accumulation: 8
* Optimization: LoRA (Low-Rank Adaptation)
* Loss Function: Causal Language Modeling (CLM)
* The model was trained with low-rank adaptation (LoRA), which allows large language models to be fine-tuned far more efficiently.
## Model Features
* Language: Turkish
* Task: Summarization
* Model Size: 7B parameters
* Quantization: optimized with 4-bit NF4 quantization.
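For illustration, the LoRA and 4-bit NF4 settings listed above map onto the Hugging Face `peft` and `bitsandbytes` libraries roughly as in the sketch below. This is not the released training script: the base checkpoint name, LoRA rank, and target modules are assumptions.
```python
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
base_model = "mistralai/Mistral-7B-v0.1"    # assumed base checkpoint
# 4-bit NF4 quantization, matching the "Quantization" note above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)
# Low-rank adapters (LoRA) instead of full fine-tuning
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # assumption: attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
# Trainer arguments taken from the hyperparameter list above
training_args = TrainingArguments(
    output_dir="zynpdata-mistral-7b-summarization",
    learning_rate=2e-4,
    max_steps=10_000,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
)
# A causal-LM Trainer (or trl's SFTTrainer) would then be run on the
# tokenized question/answer/summary examples from the dataset above.
```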
## Citation
```bibtex
@misc{zynpdata2024,
author = {sekerlipencere},
title = {zynpdata: Türkiye'nin En Büyük Açık Kaynaklı Türkçe Forum Veri Seti},
year = {2024},
publisher = {GitHub},
journal = {GitHub Repository},
howpublished = {\url{https://github.com/sekerlipencere/zynpdata-zynp_ai-teknofest}}
}
```
| null |
Non_BioNLP
|
# Zynp AI Teknofest Answer Summarization Model
This model was fine-tuned from **Mistral-7B** for summarization tasks in Turkish. It condenses the long answers given to a question into shorter, easier-to-understand information. It is optimized specifically for processing Turkish text.
## Dataset
The model was trained on zynpdata-zynp_ai-teknofest, Turkey's largest open-source Turkish dataset. For more information about the dataset and details on how to use it, see [this link](https://sekerlipencere.com.tr/posts/zynpdata-turkiyenin-en-buyuk-acik-kaynakli-turkce-veri-seti/).
## Usage
The model is straightforward to use. You can load and test it with the following Python code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the model and tokenizer
model_name = "ocaklisemih/sekerlipencere-zynpdata-mistral-7b-summarization"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
# Example input text
input_text = """<s>[INST]Soru: CS:GO FPS nasıl arttırılır?
Hocam çoklu CPU kullanımını ayarlardan kapattıysanız aktif edince 4 5 FPS artar.CS:GO görüntü ayarlarında Uber gölgelendirici kullan komutunu hayır yapmanız öneririm dikey eşitleme FPS'ini sabitler bundan dolayı yüksek FPS değerleri almana mani olur.[/INST]
Özet:
"""
# Tokenize the input text
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
input_ids = inputs["input_ids"]
# Generate the summary with the model
output = model.generate(input_ids, max_new_tokens=150)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```
## Training Details
This model was trained with the following settings (an illustrative fine-tuning sketch based on them follows the Model Features list below):
* Model: Mistral-7B
* Dataset: sekerlipencere-zynpdata-zynp_ai-teknofest
* Training Duration: 3 epochs
* Hyperparameters:
* Learning Rate: 2e-4
* Total Steps: 10,000
* Batch Size: 4
* Gradient Accumulation: 8
* Optimization: LoRA (Low-Rank Adaptation)
* Loss Function: Causal Language Modeling (CLM)
* The model was trained with low-rank adaptation (LoRA), which allows large language models to be fine-tuned far more efficiently.
## Model Features
* Language: Turkish
* Task: Summarization
* Model Size: 7B parameters
* Quantization: optimized with 4-bit NF4 quantization.
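For illustration, the LoRA and 4-bit NF4 settings listed above map onto the Hugging Face `peft` and `bitsandbytes` libraries roughly as in the sketch below. This is not the released training script: the base checkpoint name, LoRA rank, and target modules are assumptions.
```python
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
base_model = "mistralai/Mistral-7B-v0.1"    # assumed base checkpoint
# 4-bit NF4 quantization, matching the "Quantization" note above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)
# Low-rank adapters (LoRA) instead of full fine-tuning
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # assumption: attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
# Trainer arguments taken from the hyperparameter list above
training_args = TrainingArguments(
    output_dir="zynpdata-mistral-7b-summarization",
    learning_rate=2e-4,
    max_steps=10_000,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
)
# A causal-LM Trainer (or trl's SFTTrainer) would then be run on the
# tokenized question/answer/summary examples from the dataset above.
```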
## Citation
```bibtex
@misc{zynpdata2024,
author = {sekerlipencere},
title = {zynpdata: Türkiye'nin En Büyük Açık Kaynaklı Türkçe Forum Veri Seti},
year = {2024},
publisher = {GitHub},
journal = {GitHub Repository},
howpublished = {\url{https://github.com/sekerlipencere/zynpdata-zynp_ai-teknofest}}
}
```
|
{"datasets": ["sekerlipencere/zynpdata-zynp_ai-teknofest"], "language": ["tr"], "library_name": "transformers", "license": "mit", "pipeline_tag": "summarization", "tags": ["summarization", "turkish", "mistral", "causal-lm"]}
|
task
|
[
"SUMMARIZATION"
] | 42,256 |
phospho-app/MODEL_ID
|
phospho-app
|
feature-extraction
|
[
"transformers",
"safetensors",
"mpnet",
"feature-extraction",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-05-02T17:37:38Z |
2024-05-02T17:37:52+00:00
| 7 | 0 |
---
language: en
license: apache-2.0
---
# phospho-small
This is a SetFit model that can be used for Text Classification on CPU.
The model has been trained using an efficient few-shot learning technique.
## Usage
```python
from setfit import SetFitModel
model = SetFitModel.from_pretrained("MODEL_ID")
outputs = model.predict(["This is a sentence to classify", "Another sentence"])
# tensor([1, 0])
```
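For reference, the few-shot training step mentioned above typically looks like the sketch below, using the classic `SetFitTrainer` API. The base checkpoint and the toy labelled examples are placeholders for illustration; they are not the backbone or data actually used for this model.
```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer
# A handful of labelled examples is enough for SetFit's contrastive fine-tuning.
train_ds = Dataset.from_dict({
    "text": ["I loved this product", "Terrible customer support",
             "Works exactly as described", "Broke after two days"],
    "label": [1, 0, 1, 0],
})
# Placeholder base checkpoint; the actual backbone behind this model is not documented here.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    num_iterations=20,  # contrastive pairs generated per example
    num_epochs=1,
)
trainer.train()
print(model.predict(["Great value for money"]))
```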
## References
This work was possible thanks to the SetFit library and the work of:
Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts.
ArXiv: [https://doi.org/10.48550/arxiv.2209.11055](https://doi.org/10.48550/arxiv.2209.11055)
| null |
Non_BioNLP
|
# phospho-small
This is a SetFit model that can be used for Text Classification on CPU.
The model has been trained using an efficient few-shot learning technique.
## Usage
```python
from setfit import SetFitModel
model = SetFitModel.from_pretrained("MODEL_ID")
outputs = model.predict(["This is a sentence to classify", "Another sentence"])
# tensor([1, 0])
```
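For reference, the few-shot training step mentioned above typically looks like the sketch below, using the classic `SetFitTrainer` API. The base checkpoint and the toy labelled examples are placeholders for illustration; they are not the backbone or data actually used for this model.
```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer
# A handful of labelled examples is enough for SetFit's contrastive fine-tuning.
train_ds = Dataset.from_dict({
    "text": ["I loved this product", "Terrible customer support",
             "Works exactly as described", "Broke after two days"],
    "label": [1, 0, 1, 0],
})
# Placeholder base checkpoint; the actual backbone behind this model is not documented here.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    num_iterations=20,  # contrastive pairs generated per example
    num_epochs=1,
)
trainer.train()
print(model.predict(["Great value for money"]))
```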
## References
This work was possible thanks to the SetFit library and the work of:
Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts.
ArXiv: [https://doi.org/10.48550/arxiv.2209.11055](https://doi.org/10.48550/arxiv.2209.11055)
|
{"language": "en", "license": "apache-2.0"}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,257 |
RichardErkhov/deepset_-_roberta-base-squad2-4bits
|
RichardErkhov
|
text-generation
|
[
"transformers",
"safetensors",
"roberta",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | 2024-05-02T06:39:48Z |
2024-05-02T06:41:09+00:00
| 4 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
roberta-base-squad2 - bnb 4bits
- Model creator: https://huggingface.co/deepset/
- Original model: https://huggingface.co/deepset/roberta-base-squad2/
Original model description:
---
language: en
license: cc-by-4.0
datasets:
- squad_v2
model-index:
- name: deepset/roberta-base-squad2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- type: exact_match
value: 79.9309
name: Exact Match
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDhhNjg5YzNiZGQ1YTIyYTAwZGUwOWEzZTRiYzdjM2QzYjA3ZTUxNDM1NjE1MTUyMjE1MGY1YzEzMjRjYzVjYiIsInZlcnNpb24iOjF9.EH5JJo8EEFwU7osPz3s7qanw_tigeCFhCXjSfyN0Y1nWVnSfulSxIk_DbAEI5iE80V4EKLyp5-mYFodWvL2KDA
- type: f1
value: 82.9501
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjk5ZDYwOGQyNjNkMWI0OTE4YzRmOTlkY2JjNjQ0YTZkNTMzMzNkYTA0MDFmNmI3NjA3NjNlMjhiMDQ2ZjJjNSIsInZlcnNpb24iOjF9.DDm0LNTkdLbGsue58bg1aH_s67KfbcmkvL-6ZiI2s8IoxhHJMSf29H_uV2YLyevwx900t-MwTVOW3qfFnMMEAQ
- type: total
value: 11869
name: total
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGFkMmI2ODM0NmY5NGNkNmUxYWViOWYxZDNkY2EzYWFmOWI4N2VhYzY5MGEzMTVhOTU4Zjc4YWViOGNjOWJjMCIsInZlcnNpb24iOjF9.fexrU1icJK5_MiifBtZWkeUvpmFISqBLDXSQJ8E6UnrRof-7cU0s4tX_dIsauHWtUpIHMPZCf5dlMWQKXZuAAA
- task:
type: question-answering
name: Question Answering
dataset:
name: squad
type: squad
config: plain_text
split: validation
metrics:
- type: exact_match
value: 85.289
name: Exact Match
- type: f1
value: 91.841
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: adversarial_qa
type: adversarial_qa
config: adversarialQA
split: validation
metrics:
- type: exact_match
value: 29.500
name: Exact Match
- type: f1
value: 40.367
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_adversarial
type: squad_adversarial
config: AddOneSent
split: validation
metrics:
- type: exact_match
value: 78.567
name: Exact Match
- type: f1
value: 84.469
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts amazon
type: squadshifts
config: amazon
split: test
metrics:
- type: exact_match
value: 69.924
name: Exact Match
- type: f1
value: 83.284
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts new_wiki
type: squadshifts
config: new_wiki
split: test
metrics:
- type: exact_match
value: 81.204
name: Exact Match
- type: f1
value: 90.595
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts nyt
type: squadshifts
config: nyt
split: test
metrics:
- type: exact_match
value: 82.931
name: Exact Match
- type: f1
value: 90.756
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts reddit
type: squadshifts
config: reddit
split: test
metrics:
- type: exact_match
value: 71.550
name: Exact Match
- type: f1
value: 82.939
name: F1
---
# roberta-base for QA
This is the [roberta-base](https://huggingface.co/roberta-base) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 96
n_epochs = 2
base_LM_model = "roberta-base"
max_seq_len = 386
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```
## Using a distilled model instead
Please note that we have also released a distilled version of this model called [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2). The distilled model has a comparable prediction quality and runs at twice the speed of the base model.
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2",tokenizer="deepset/roberta-base-squad2")
```
For a complete example of ``roberta-base-squad2`` being used for Question Answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system)
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/roberta-base-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
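Because the model is also trained on unanswerable questions, the question-answering pipeline can be asked to return an empty answer when the context contains none. The snippet below is only an illustrative sketch; the example inputs and flag values are not part of the original card.
```python
from transformers import pipeline
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
# handle_impossible_answer lets the pipeline return an empty answer
# (with a no-answer score) when the context does not actually contain one.
no_answer_input = {
    "question": "What is the capital of France?",
    "context": "The option to convert models between FARM and transformers "
               "gives freedom to the user and let people easily switch between frameworks.",
}
print(qa(no_answer_input, handle_impossible_answer=True, top_k=1))
```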
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 79.87029394424324,
"f1": 82.91251169582613,
"total": 11873,
"HasAns_exact": 77.93522267206478,
"HasAns_f1": 84.02838248389763,
"HasAns_total": 5928,
"NoAns_exact": 81.79983179142137,
"NoAns_f1": 81.79983179142137,
"NoAns_total": 5945
```
## Authors
**Branden Chan:** [email protected]
**Timo Möller:** [email protected]
**Malte Pietsch:** [email protected]
**Tanay Soni:** [email protected]
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
roberta-base-squad2 - bnb 4bits
- Model creator: https://huggingface.co/deepset/
- Original model: https://huggingface.co/deepset/roberta-base-squad2/
Original model description:
---
language: en
license: cc-by-4.0
datasets:
- squad_v2
model-index:
- name: deepset/roberta-base-squad2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- type: exact_match
value: 79.9309
name: Exact Match
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDhhNjg5YzNiZGQ1YTIyYTAwZGUwOWEzZTRiYzdjM2QzYjA3ZTUxNDM1NjE1MTUyMjE1MGY1YzEzMjRjYzVjYiIsInZlcnNpb24iOjF9.EH5JJo8EEFwU7osPz3s7qanw_tigeCFhCXjSfyN0Y1nWVnSfulSxIk_DbAEI5iE80V4EKLyp5-mYFodWvL2KDA
- type: f1
value: 82.9501
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjk5ZDYwOGQyNjNkMWI0OTE4YzRmOTlkY2JjNjQ0YTZkNTMzMzNkYTA0MDFmNmI3NjA3NjNlMjhiMDQ2ZjJjNSIsInZlcnNpb24iOjF9.DDm0LNTkdLbGsue58bg1aH_s67KfbcmkvL-6ZiI2s8IoxhHJMSf29H_uV2YLyevwx900t-MwTVOW3qfFnMMEAQ
- type: total
value: 11869
name: total
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGFkMmI2ODM0NmY5NGNkNmUxYWViOWYxZDNkY2EzYWFmOWI4N2VhYzY5MGEzMTVhOTU4Zjc4YWViOGNjOWJjMCIsInZlcnNpb24iOjF9.fexrU1icJK5_MiifBtZWkeUvpmFISqBLDXSQJ8E6UnrRof-7cU0s4tX_dIsauHWtUpIHMPZCf5dlMWQKXZuAAA
- task:
type: question-answering
name: Question Answering
dataset:
name: squad
type: squad
config: plain_text
split: validation
metrics:
- type: exact_match
value: 85.289
name: Exact Match
- type: f1
value: 91.841
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: adversarial_qa
type: adversarial_qa
config: adversarialQA
split: validation
metrics:
- type: exact_match
value: 29.500
name: Exact Match
- type: f1
value: 40.367
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_adversarial
type: squad_adversarial
config: AddOneSent
split: validation
metrics:
- type: exact_match
value: 78.567
name: Exact Match
- type: f1
value: 84.469
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts amazon
type: squadshifts
config: amazon
split: test
metrics:
- type: exact_match
value: 69.924
name: Exact Match
- type: f1
value: 83.284
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts new_wiki
type: squadshifts
config: new_wiki
split: test
metrics:
- type: exact_match
value: 81.204
name: Exact Match
- type: f1
value: 90.595
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts nyt
type: squadshifts
config: nyt
split: test
metrics:
- type: exact_match
value: 82.931
name: Exact Match
- type: f1
value: 90.756
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts reddit
type: squadshifts
config: reddit
split: test
metrics:
- type: exact_match
value: 71.550
name: Exact Match
- type: f1
value: 82.939
name: F1
---
# roberta-base for QA
This is the [roberta-base](https://huggingface.co/roberta-base) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 96
n_epochs = 2
base_LM_model = "roberta-base"
max_seq_len = 386
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```
## Using a distilled model instead
Please note that we have also released a distilled version of this model called [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2). The distilled model has a comparable prediction quality and runs at twice the speed of the base model.
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2",tokenizer="deepset/roberta-base-squad2")
```
For a complete example of ``roberta-base-squad2`` being used for Question Answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system)
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/roberta-base-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
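Because the model is also trained on unanswerable questions, the question-answering pipeline can be asked to return an empty answer when the context contains none. The snippet below is only an illustrative sketch; the example inputs and flag values are not part of the original card.
```python
from transformers import pipeline
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
# handle_impossible_answer lets the pipeline return an empty answer
# (with a no-answer score) when the context does not actually contain one.
no_answer_input = {
    "question": "What is the capital of France?",
    "context": "The option to convert models between FARM and transformers "
               "gives freedom to the user and let people easily switch between frameworks.",
}
print(qa(no_answer_input, handle_impossible_answer=True, top_k=1))
```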
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 79.87029394424324,
"f1": 82.91251169582613,
"total": 11873,
"HasAns_exact": 77.93522267206478,
"HasAns_f1": 84.02838248389763,
"HasAns_total": 5928,
"NoAns_exact": 81.79983179142137,
"NoAns_f1": 81.79983179142137,
"NoAns_total": 5945
```
## Authors
**Branden Chan:** [email protected]
**Timo Möller:** [email protected]
**Malte Pietsch:** [email protected]
**Tanay Soni:** [email protected]
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 42,258 |
openthaigpt/openthaigpt-1.0.0-70b-chat
|
openthaigpt
|
text-generation
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"openthaigpt",
"th",
"en",
"arxiv:2411.07238",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-04-05T11:55:28Z |
2024-11-15T04:56:19+00:00
| 221 | 11 |
---
language:
- th
- en
library_name: transformers
license: llama2
pipeline_tag: text-generation
tags:
- openthaigpt
- llama
---
# 🇹🇭 OpenThaiGPT 70b 1.0.0

[More Info](https://openthaigpt.aieat.or.th/)
🇹🇭 **OpenThaiGPT 70b Version 1.0.0** is an advanced 70-billion-parameter Thai language chat model based on LLaMA v2 released on April 8, 2024. It has been specifically fine-tuned for Thai instructions and enhanced by incorporating over 10,000 of the most commonly used Thai words into the large language model's (LLM) dictionary, significantly boosting its response speed.
## Highlights
- **Leading-edge Thai language LLM**, setting new benchmarks by achieving the highest average scores across several Thai language exams when compared to all other open-source Thai LLMs.
- **The first 70b open-source Thai LLM**, achieving higher scores on Thai exams than OpenAI GPT 3.5, Google Gemini, and Claude 3 Haiku.
- **Support for extended conversations** across multiple turns.
- Support the use case of **Retrieval Augmented Generation (RAG)** for enriched response generation.
- **Generation speed increased tenfold**, thanks to the addition of 10,000 frequently used Thai words to the model's dictionary.
- Pretrained upon a foundation of **more than 65 billion Thai language words** and meticulously fine-tuned with over 1 million Thai instruction examples.
- Capable of understanding and processing **input contexts of up to 4096 Thai words**, allowing for detailed and complex instructions.
## Benchmark on OpenThaiGPT Eval
**Note:** Please refer to the ``OTG 70b (April 2024)`` column in the table below for this model's evaluation results.
| **Exams** | **OTG 7b (Aug 2023)** | **OTG 13b (Dec 2023)** | **OTG 7b (April 2024)** | **OTG 13b (April 2024)** | <b style="color:blue">OTG 70b (April 2024)</b> | **SeaLLM 7b v1** | **SeaLLM 7b v2** | **SeaLion 7b** | **WanchanGLM 7b** | **Sailor-7b-Chat** | **TyphoonGPT 7b Instruct** | **GPT3.5** | **GPT4** | **Gemini Pro** | **Gemini 1.5** | **Claude 3 Haiku** | **Claude 3 Sonnet** | **Claude 3 Opus** |
|----------------------------|-----------------------|------------------------|-------------------------|--------------------------|--------------------------|------------------|------------------|----------------|-------------------|--------------------|----------------------------|------------|----------|----------------|----------------|--------------------|---------------------|-------------------|
| **A-Level** | 17.50% | 34.17% | 25.00% | 30.83% | <b style="color:blue">45.83%</b> | 18.33% | 34.17% | 21.67% | 17.50% | 40.00% | 37.50% | 38.33% | 65.83% | 56.67% | 55.83% | 58.33% | 59.17% | 77.50% |
| **TGAT** | 24.00% | 22.00% | 22.00% | 36.00% | <b style="color:blue">36.00%</b> | 14.00% | 28.00% | 24.00% | 16.00% | 34.00% | 30.00% | 28.00% | 44.00% | 22.00% | 28.00% | 36.00% | 34.00% | 46.00% |
| **TPAT1** | 22.50% | 47.50% | 42.50% | 27.50% | <b style="color:blue">62.50%</b> | 22.50% | 27.50% | 22.50% | 17.50% | 40.00% | 47.50% | 45.00% | 52.50% | 52.50% | 50.00% | 52.50% | 50.00% | 62.50% |
| **thai_investment_consultant_exams** | 8.00% | 28.00% | 76.00% | 84.00% | <b style="color:blue">68.00%</b> | 16.00% | 28.00% | 24.00% | 16.00% | 24.00% | 32.00% | 40.00% | 64.00% | 52.00% | 32.00% | 44.00% | 64.00% | 72.00% |
| **facebook_beleble_tha_200** | 25.00% | 45.00% | 34.50% | 39.50% | <b style="color:blue">70.00%</b> | 13.50% | 51.00% | 27.00% | 24.50% | 63.00% | 51.50% | 50.00% | 72.50% | 65.00% | 74.00% | 63.50% | 77.00% | 90.00% |
| **xcopa_th_200** | 45.00% | 56.50% | 49.50% | 51.50% | <b style="color:blue">74.50%</b> | 26.50% | 47.00% | 51.50% | 48.50% | 68.50% | 65.00% | 64.00% | 82.00% | 68.00% | 74.00% | 64.00% | 80.00% | 86.00% |
| **xnli2.0_th_200** | 33.50% | 34.50% | 39.50% | 31.00% | <b style="color:blue">47.00%</b> | 21.00% | 43.00% | 37.50% | 33.50% | 16.00% | 20.00% | 50.00% | 69.00% | 53.00% | 54.50% | 50.00% | 68.00% | 68.50% |
| **ONET M3** | 17.85% | 38.86% | 34.11% | 39.36% | <b style="color:blue">56.15%</b> | 15.58% | 23.92% | 21.79% | 19.56% | 21.37% | 28.03% | 37.91% | 49.97% | 55.99% | 57.41% | 52.73% | 40.60% | 63.87% |
| **ONET M6** | 21.14% | 28.87% | 22.53% | 23.32% | <b style="color:blue">42.85%</b> | 15.09% | 19.48% | 16.96% | 20.67% | 28.64% | 27.46% | 34.44% | 46.29% | 45.53% | 50.23% | 34.79% | 38.49% | 48.56% |
| **AVERAGE SCORE** | 23.83% | 37.27% | 38.40% | 40.33% | <b style="color:blue;font-size:1.3em">55.87%</b> | 18.06% | 33.56% | 27.44% | 23.75% | 37.28% | 37.67% | 43.07% | 60.68% | 52.30% | 52.89% | 50.65% | 56.81% | 68.32% |
Thai language multiple choice exams, Test on unseen test set, Zero-shot learning. Benchmark source code and exams information: https://github.com/OpenThaiGPT/openthaigpt_eval
(Updated on: 7 April 2024)
## Benchmark on M3Exam evaluated by an external party (Float16.cloud)
| **Models** | **ENGLISH (M3EXAM)** | **THAI (M3EXAM)** |
|---------------------|------------------|---------------|
| OTG-7b | 40.92 % | 25.14 % |
| OTG-13b | 53.69 % | 36.49 % |
| <b style="color:blue">OTG-70b</b> | <b style="color:blue">72.58 %</b> | <b style="color:blue">48.29 %</b> |
| GPT-3.5-turbo-0613* | - | 34.1 % |
| GPT-4-0613* | - | 56.0 % |
More information: https://blog.float16.cloud/the-first-70b-thai-llm/
## Licenses
**Source Code**: Apache Software License 2.0.<br>
**Weights**: Research and **commercial** uses.<br>
## Sponsors
<img src="https://hf.fast360.xyz/production/uploads/5fcd9c426d942eaf4d1ebd30/FDC9WYN2iykQbVW1rY4q5.png" width="600px">
## Supports
- Official website: https://openthaigpt.aieat.or.th
- Facebook page: https://web.facebook.com/groups/openthaigpt
- A Discord server for discussion and support [here](https://discord.gg/rUTp6dfVUF)
- E-mail: [email protected]
## Prompt Format
Prompt format is based on Llama2 with a small modification (Adding "###" to specify the context part)
```
<s>[INST] <<SYS>
{system_prompt}
<</SYS>>
{human_turn1}###{context_turn1} [/INST]{assistant_turn1}</s><s>{human_turn2}###{context_turn2} [/INST] ...
```
### System prompt:
```
You are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด
```
### Examples
#### Single Turn Conversation Example
```
<s>[INST] <<SYS>
You are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด
<</SYS>>
สวัสดีครับ [/INST]
```
#### Single Turn Conversation with Context (RAG) Example
```
<s>[INST] <<SYS>
You are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด
<</SYS>>
กรุงเทพมีพื้นที่เท่าไร่###กรุงเทพมหานคร เป็นเมืองหลวง นครและมหานครที่มีประชากรมากที่สุดของประเทศไทย กรุงเทพมหานครมีพื้นที่ทั้งหมด 1,568.737 ตร.กม. มีประชากรตามทะเบียนราษฎรกว่า 8 ล้านคน [/INST]
```
#### Multi Turn Conversation Example
##### First turn
```
<s>[INST] <<SYS>
You are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด
<</SYS>>
สวัสดีครับ [/INST]
```
##### Second turn
```
<s>[INST] <<SYS>
You are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด
<</SYS>>
สวัสดีครับ [/INST]สวัสดีค่ะ มีคำถามอะไร ถามได้เลย</s><s>ขอสูตรทำส้มตำหน่อย [/INST]
```
##### Third turn
```
<s>[INST] <<SYS>
You are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด
<</SYS>>
สวัสดีครับ [/INST]สวัสดีค่ะ มีคำถามอะไร ถามได้เลย</s><s>ขอสูตรทำส้มตำหน่อย [/INST]ได้เลยค่ะ ส้มตำเป็นเมนูที่ทำง่ายและอร่อย มาเริ่มกันเลยนะคะ</s><s>เริ่มได้เลย [/INST]
```
##### Fourth turn
```
<s>[INST] <<SYS>
You are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด
<</SYS>>
สวัสดีครับ [/INST]สวัสดีค่ะ มีคำถามอะไร ถามได้เลย</s><s>ขอสูตรทำส้มตำหน่อย [/INST]ได้เลยค่ะ ส้มตำเป็นเมนูที่ทำง่ายและอร่อย มาเริ่มกันเลยนะคะ</s><s>เริ่มได้เลย [/INST]
ตำไทยเป็นเมนูที่ใครๆ ก็คุ้นเคย ด้วยรสชาติที่กลมกล่อม หวาน เปรี้ยว เค็ม เผ็ด ครบรส วันนี้เราจะมาทำส้มตำด้วยกันค่ะ สิ่งแรกที่ต้องมีคือเครื่องปรุงดังนี้
- มะละกอดิบ 1 ลูก
- กุ้งแห้ง 1/2 ถ้วย
- กระเทียม 3 กลีบ
- พริกขี้หนู 3 เม็ด
- น้ำตาลปี๊บ 1 ช้อนโต๊ะ
- น้ำปลา 2 ช้อนโต๊ะ
- มะนาว 1 ลูก
- งาขาวคั่ว 1/4 ถ้วย
วิธีทำมีดังนี้ค่ะ
1. เริ่มจากล้างมะละกอให้สะอาด แล้วใช้มีดปอกเปลือก เอาไส้ออก หั่นเป็นเส้นบางๆ เตรียมไว้
2. ตำกุ้งแห้งให้ละเอียด แล้วตักขึ้นพักไว้
3. ใช้ครกหินหรือเครื่องปั่น บดพริกขี้หนูกับกระเทียมให้ละเอียด
4. ใส่กุ้งแห้งที่ตำแล้วลงไปผสม ตามด้วยน้ำตาลปี๊บ น้ำปลา มะนาว และเส้นมะละกอ คลุกเคล้าให้เข้ากัน
5. ชิมรสให้ได้รสหวาน เปรี้ยว เค็ม เผ็ด จากนั้นตักขึ้นเสิร์ฟพร้อมผักสด อาทิ กะหล่ำปลี ถั่วงอก แครอท ผักบุ้ง</s><s>ขอบคุณครับ [/INST]
```
#### Multi Turn Conversation with Context (RAG) Example
```
<s>[INST] <<SYS>
You are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด
<</SYS>>
กรุงเทพมีพื้นที่เท่าไร่###กรุงเทพมหานคร เป็นเมืองหลวง นครและมหานครที่มีประชากรมากที่สุดของประเทศไทย กรุงเทพมหานครมีพื้นที่ทั้งหมด 1,568.737 ตร.กม. มีประชากรตามทะเบียนราษฎรกว่า 8 ล้านคน [/INST]
กรุงเทพมหานครมีพื้นที่ทั้งหมด 1,568.737 ตร.กม.</s><s>และประชากรล่ะ [/INST]
```
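The template above can also be assembled programmatically. The helper below is only a convenience sketch that mirrors the format strings printed in this section (including the `###` context separator and the `<s>`/`</s>` turn markers); it is not an official utility shipped with the model, and newline placement simply follows the examples as printed above.
```python
SYSTEM_PROMPT = ("You are a question answering assistant. Answer the question as truthful "
                 "and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด")
def build_prompt(turns, context=None, system_prompt=SYSTEM_PROMPT):
    # turns: list of (user_message, assistant_reply) pairs; use None as the reply
    # of the final, still-open turn. context: optional RAG text appended to the
    # first user message after "###", as in the RAG examples above.
    user0, reply0 = turns[0]
    if context:
        user0 = f"{user0}###{context}"
    prompt = f"<s>[INST] <<SYS>\n{system_prompt}\n<</SYS>>\n{user0} [/INST]"
    if reply0 is not None:
        prompt += f"{reply0}</s>"
    for user, reply in turns[1:]:
        prompt += f"<s>{user} [/INST]"
        if reply is not None:
            prompt += f"{reply}</s>"
    return prompt
# Single-turn example, matching the first example above:
print(build_prompt([("สวัสดีครับ", None)]))
```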
## How to use
### Huggingface
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Ensure CUDA is available
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print(f"Using device: {device}")
# Init Model
model_path="openthaigpt/openthaigpt-1.0.0-7b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, torch_dtype=torch.float16)
model.to(device)
# Prompt
prompt = "สวัสดีครับ OpenThaiGPT"
llama_prompt = f"<s>[INST] <<SYS>>\nYou are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด<</SYS>>\n\n{prompt} [/INST]"
inputs = tokenizer.encode(llama_prompt, return_tensors="pt")
inputs = inputs.to(device)
# Generate
outputs = model.generate(inputs, max_length=512, num_return_sequences=1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### vLLM
1. Install VLLM (https://github.com/vllm-project/vllm)
2. Run server
```bash
python -m vllm.entrypoints.api_server --model /path/to/model --tensor-parallel-size num_gpus
```
3. Run inference (CURL example)
```bash
curl --request POST \
--url http://localhost:8000/generate \
--header "Content-Type: application/json" \
--data '{"prompt": "<s>[INST] <<SYS>>\nYou are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด\n<</SYS>>\n\nอยากลดความอ้วนต้องทำอย่างไร [/INST]","use_beam_search": false, "temperature": 0.1, "max_tokens": 512, "top_p": 0.75, "top_k": 40, "frequency_penalty": 0.3 "stop": "</s>"}'
```
### LlamaCPP (for GGUF)
1. Build and Install LlamaCPP (LLAMA_CUBLAS=1 is for GPU inference)
```bash
git clone https://github.com/ggerganov/llama.cpp.git \
&& cd llama.cpp \
&& make -j LLAMA_CUBLAS=1 CUDA_DOCKER_ARCH=all
```
2. Run server
```bash
./server -m /path/to/ggml-model-f16.gguf -c 3072 -ngl 81 -ts 1,1 --host 0.0.0.0
```
3. Run inference (CURL example)
```bash
curl --location 'http://localhost:8000/completion' \
--header 'Content-Type: application/json' \
--data '{
"prompt":"<s>[INST] <<SYS>>\nYou are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด friendly\n\n<<SYS>>\n\nอยากลดความอ้วนต้องทำอย่างไร [/INST]",
"max_tokens": 512,
"stop":"</s>"
}'
```
### GPU Memory Requirements
| **Number of Parameters** | **FP 16 bits** | **8 bits (Quantized)** | **4 bits (Quantized)** | **Example Graphic Card for 4 bits** |
|------------------|----------------|------------------------|------------------------|---------------------------------------------|
| **7b** | 24 GB | 12 GB | 6 GB | Nvidia RTX 4060 8GB |
| **13b** | 48 GB | 24 GB | 12 GB | Nvidia RTX 4070 16GB |
| **70b** | 192 GB | 96 GB | 48 GB | Nvidia RTX 4090 24GB x 2 cards |
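As a rough illustration of the 4-bit column above, the weights can be loaded with bitsandbytes quantization through Hugging Face Transformers. This is a sketch under assumed quantization settings (NF4 weights, float16 compute); it is not an officially validated configuration, and quantization may affect output quality.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_id = "openthaigpt/openthaigpt-1.0.0-70b-chat"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # assumption: NF4 4-bit weights
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",                    # spread layers across available GPUs
)
```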
### OpenThaiGPT Team
* Kobkrit Viriyayudhakorn ([email protected])
* Sumeth Yuenyong ([email protected])
* Thaweewat Rugsujarit ([email protected])
* Jillaphat Jaroenkantasima ([email protected])
* Norapat Buppodom ([email protected])
* Koravich Sangkaew ([email protected])
* Peerawat Rojratchadakorn ([email protected])
* Surapon Nonesung ([email protected])
* Chanon Utupon ([email protected])
* Sadhis Wongprayoon ([email protected])
* Nucharee Thongthungwong ([email protected])
* Chawakorn Phiantham ([email protected])
* Patteera Triamamornwooth ([email protected])
* Nattarika Juntarapaoraya ([email protected])
* Kriangkrai Saetan ([email protected])
* Pitikorn Khlaisamniang ([email protected])
### Citation
If OpenThaiGPT has been beneficial for your work, kindly consider citing it as follows:
#### Bibtex
```bibtex
@misc{yuenyong2024openthaigpt15thaicentricopen,
title={OpenThaiGPT 1.5: A Thai-Centric Open Source Large Language Model},
author={Sumeth Yuenyong and Kobkrit Viriyayudhakorn and Apivadee Piyatumrong and Jillaphat Jaroenkantasima},
year={2024},
eprint={2411.07238},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2411.07238},
}
```
#### APA Style (for TXT, MS Word)
```
Yuenyong, S., Viriyayudhakorn, K., Piyatumrong, A., & Jaroenkantasima, J. (2024). OpenThaiGPT 1.5: A Thai-Centric Open Source Large Language Model. arXiv [Cs.CL]. Retrieved from http://arxiv.org/abs/2411.07238
```
<i>Disclaimer: Provided responses are not guaranteed.</i>
| null |
Non_BioNLP
|
# 🇹🇭 OpenThaiGPT 70b 1.0.0

[More Info](https://openthaigpt.aieat.or.th/)
🇹🇭 **OpenThaiGPT 70b Version 1.0.0** is an advanced 70-billion-parameter Thai language chat model based on LLaMA v2 released on April 8, 2024. It has been specifically fine-tuned for Thai instructions and enhanced by incorporating over 10,000 of the most commonly used Thai words into the large language model's (LLM) dictionary, significantly boosting its response speed.
## Highlights
- **Leading-edge Thai language LLM**, setting new benchmarks by achieving the highest average scores across several Thai language exams when compared to all other open-source Thai LLMs.
- **The first 70b open-source Thai LLM**, achieving higher scores on Thai exams than OpenAI GPT 3.5, Google Gemini, and Claude 3 Haiku.
- **Support for extended conversations** across multiple turns.
- Support the use case of **Retrieval Augmented Generation (RAG)** for enriched response generation.
- **Generation speed increased tenfold**, thanks to the addition of 10,000 frequently used Thai words to the model's dictionary.
- Pretrained upon a foundation of **more than 65 billion Thai language words** and meticulously fine-tuned with over 1 million Thai instruction examples.
- Capable of understanding and processing **input contexts of up to 4096 Thai words**, allowing for detailed and complex instructions.
## Benchmark on OpenThaiGPT Eval
**Note:** Please refer to the ``OTG 70b (April 2024)`` column in the table below for this model's evaluation results.
| **Exams** | **OTG 7b (Aug 2023)** | **OTG 13b (Dec 2023)** | **OTG 7b (April 2024)** | **OTG 13b (April 2024)** | <b style="color:blue">OTG 70b (April 2024)</b> | **SeaLLM 7b v1** | **SeaLLM 7b v2** | **SeaLion 7b** | **WanchanGLM 7b** | **Sailor-7b-Chat** | **TyphoonGPT 7b Instruct** | **GPT3.5** | **GPT4** | **Gemini Pro** | **Gemini 1.5** | **Claude 3 Haiku** | **Claude 3 Sonnet** | **Claude 3 Opus** |
|----------------------------|-----------------------|------------------------|-------------------------|--------------------------|--------------------------|------------------|------------------|----------------|-------------------|--------------------|----------------------------|------------|----------|----------------|----------------|--------------------|---------------------|-------------------|
| **A-Level** | 17.50% | 34.17% | 25.00% | 30.83% | <b style="color:blue">45.83%</b> | 18.33% | 34.17% | 21.67% | 17.50% | 40.00% | 37.50% | 38.33% | 65.83% | 56.67% | 55.83% | 58.33% | 59.17% | 77.50% |
| **TGAT** | 24.00% | 22.00% | 22.00% | 36.00% | <b style="color:blue">36.00%</b> | 14.00% | 28.00% | 24.00% | 16.00% | 34.00% | 30.00% | 28.00% | 44.00% | 22.00% | 28.00% | 36.00% | 34.00% | 46.00% |
| **TPAT1** | 22.50% | 47.50% | 42.50% | 27.50% | <b style="color:blue">62.50%</b> | 22.50% | 27.50% | 22.50% | 17.50% | 40.00% | 47.50% | 45.00% | 52.50% | 52.50% | 50.00% | 52.50% | 50.00% | 62.50% |
| **thai_investment_consultant_exams** | 8.00% | 28.00% | 76.00% | 84.00% | <b style="color:blue">68.00%</b> | 16.00% | 28.00% | 24.00% | 16.00% | 24.00% | 32.00% | 40.00% | 64.00% | 52.00% | 32.00% | 44.00% | 64.00% | 72.00% |
| **facebook_beleble_tha_200** | 25.00% | 45.00% | 34.50% | 39.50% | <b style="color:blue">70.00%</b> | 13.50% | 51.00% | 27.00% | 24.50% | 63.00% | 51.50% | 50.00% | 72.50% | 65.00% | 74.00% | 63.50% | 77.00% | 90.00% |
| **xcopa_th_200** | 45.00% | 56.50% | 49.50% | 51.50% | <b style="color:blue">74.50%</b> | 26.50% | 47.00% | 51.50% | 48.50% | 68.50% | 65.00% | 64.00% | 82.00% | 68.00% | 74.00% | 64.00% | 80.00% | 86.00% |
| **xnli2.0_th_200** | 33.50% | 34.50% | 39.50% | 31.00% | <b style="color:blue">47.00%</b> | 21.00% | 43.00% | 37.50% | 33.50% | 16.00% | 20.00% | 50.00% | 69.00% | 53.00% | 54.50% | 50.00% | 68.00% | 68.50% |
| **ONET M3** | 17.85% | 38.86% | 34.11% | 39.36% | <b style="color:blue">56.15%</b> | 15.58% | 23.92% | 21.79% | 19.56% | 21.37% | 28.03% | 37.91% | 49.97% | 55.99% | 57.41% | 52.73% | 40.60% | 63.87% |
| **ONET M6** | 21.14% | 28.87% | 22.53% | 23.32% | <b style="color:blue">42.85%</b> | 15.09% | 19.48% | 16.96% | 20.67% | 28.64% | 27.46% | 34.44% | 46.29% | 45.53% | 50.23% | 34.79% | 38.49% | 48.56% |
| **AVERAGE SCORE** | 23.83% | 37.27% | 38.40% | 40.33% | <b style="color:blue;font-size:1.3em">55.87%</b> | 18.06% | 33.56% | 27.44% | 23.75% | 37.28% | 37.67% | 43.07% | 60.68% | 52.30% | 52.89% | 50.65% | 56.81% | 68.32% |
Thai language multiple choice exams, Test on unseen test set, Zero-shot learning. Benchmark source code and exams information: https://github.com/OpenThaiGPT/openthaigpt_eval
(Updated on: 7 April 2024)
## Benchmark on M3Exam evaluated by an external party (Float16.cloud)
| **Models** | **ENGLISH (M3EXAM)** | **THAI (M3EXAM)** |
|---------------------|------------------|---------------|
| OTG-7b | 40.92 % | 25.14 % |
| OTG-13b | 53.69 % | 36.49 % |
| <b style="color:blue">OTG-70b</b> | <b style="color:blue">72.58 %</b> | <b style="color:blue">48.29 %</b> |
| GPT-3.5-turbo-0613* | - | 34.1 % |
| GPT-4-0613* | - | 56.0 % |
More information: https://blog.float16.cloud/the-first-70b-thai-llm/
## Licenses
**Source Code**: Apache Software License 2.0.<br>
**Weights**: Research and **commercial** uses.<br>
## Sponsors
<img src="https://hf.fast360.xyz/production/uploads/5fcd9c426d942eaf4d1ebd30/FDC9WYN2iykQbVW1rY4q5.png" width="600px">
## Supports
- Official website: https://openthaigpt.aieat.or.th
- Facebook page: https://web.facebook.com/groups/openthaigpt
- A Discord server for discussion and support [here](https://discord.gg/rUTp6dfVUF)
- E-mail: [email protected]
## Prompt Format
Prompt format is based on Llama2 with a small modification (Adding "###" to specify the context part)
```
<s>[INST] <<SYS>
{system_prompt}
<</SYS>>
{human_turn1}###{context_turn1} [/INST]{assistant_turn1}</s><s>{human_turn2}###{context_turn2} [/INST] ...
```
### System prompt:
```
You are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด
```
### Examples
#### Single Turn Conversation Example
```
<s>[INST] <<SYS>
You are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด
<</SYS>>
สวัสดีครับ [/INST]
```
#### Single Turn Conversation with Context (RAG) Example
```
<s>[INST] <<SYS>
You are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด
<</SYS>>
กรุงเทพมีพื้นที่เท่าไร่###กรุงเทพมหานคร เป็นเมืองหลวง นครและมหานครที่มีประชากรมากที่สุดของประเทศไทย กรุงเทพมหานครมีพื้นที่ทั้งหมด 1,568.737 ตร.กม. มีประชากรตามทะเบียนราษฎรกว่า 8 ล้านคน [/INST]
```
#### Multi Turn Conversation Example
##### First turn
```
<s>[INST] <<SYS>
You are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด
<</SYS>>
สวัสดีครับ [/INST]
```
##### Second turn
```
<s>[INST] <<SYS>
You are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด
<</SYS>>
สวัสดีครับ [/INST]สวัสดีค่ะ มีคำถามอะไร ถามได้เลย</s><s>ขอสูตรทำส้มตำหน่อย [/INST]
```
##### Third turn
```
<s>[INST] <<SYS>
You are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด
<</SYS>>
สวัสดีครับ [/INST]สวัสดีค่ะ มีคำถามอะไร ถามได้เลย</s><s>ขอสูตรทำส้มตำหน่อย [/INST]ได้เลยค่ะ ส้มตำเป็นเมนูที่ทำง่ายและอร่อย มาเริ่มกันเลยนะคะ</s><s>เริ่มได้เลย [/INST]
```
##### Fourth turn
```
<s>[INST] <<SYS>
You are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด
<</SYS>>
สวัสดีครับ [/INST]สวัสดีค่ะ มีคำถามอะไร ถามได้เลย</s><s>ขอสูตรทำส้มตำหน่อย [/INST]ได้เลยค่ะ ส้มตำเป็นเมนูที่ทำง่ายและอร่อย มาเริ่มกันเลยนะคะ</s><s>เริ่มได้เลย [/INST]
ตำไทยเป็นเมนูที่ใครๆ ก็คุ้นเคย ด้วยรสชาติที่กลมกล่อม หวาน เปรี้ยว เค็ม เผ็ด ครบรส วันนี้เราจะมาทำส้มตำด้วยกันค่ะ สิ่งแรกที่ต้องมีคือเครื่องปรุงดังนี้
- มะละกอดิบ 1 ลูก
- กุ้งแห้ง 1/2 ถ้วย
- กระเทียม 3 กลีบ
- พริกขี้หนู 3 เม็ด
- น้ำตาลปี๊บ 1 ช้อนโต๊ะ
- น้ำปลา 2 ช้อนโต๊ะ
- มะนาว 1 ลูก
- งาขาวคั่ว 1/4 ถ้วย
วิธีทำมีดังนี้ค่ะ
1. เริ่มจากล้างมะละกอให้สะอาด แล้วใช้มีดปอกเปลือก เอาไส้ออก หั่นเป็นเส้นบางๆ เตรียมไว้
2. ตำกุ้งแห้งให้ละเอียด แล้วตักขึ้นพักไว้
3. ใช้ครกหินหรือเครื่องปั่น บดพริกขี้หนูกับกระเทียมให้ละเอียด
4. ใส่กุ้งแห้งที่ตำแล้วลงไปผสม ตามด้วยน้ำตาลปี๊บ น้ำปลา มะนาว และเส้นมะละกอ คลุกเคล้าให้เข้ากัน
5. ชิมรสให้ได้รสหวาน เปรี้ยว เค็ม เผ็ด จากนั้นตักขึ้นเสิร์ฟพร้อมผักสด อาทิ กะหล่ำปลี ถั่วงอก แครอท ผักบุ้ง</s><s>ขอบคุณครับ [/INST]
```
#### Multi Turn Conversation with Context (RAG) Example
```
<s>[INST] <<SYS>
You are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด
<</SYS>>
กรุงเทพมีพื้นที่เท่าไร่###กรุงเทพมหานคร เป็นเมืองหลวง นครและมหานครที่มีประชากรมากที่สุดของประเทศไทย กรุงเทพมหานครมีพื้นที่ทั้งหมด 1,568.737 ตร.กม. มีประชากรตามทะเบียนราษฎรกว่า 8 ล้านคน [/INST]
กรุงเทพมหานครมีพื้นที่ทั้งหมด 1,568.737 ตร.กม.</s><s>และประชากรล่ะ [/INST]
```
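The template above can also be assembled programmatically. The helper below is only a convenience sketch that mirrors the format strings printed in this section (including the `###` context separator and the `<s>`/`</s>` turn markers); it is not an official utility shipped with the model, and newline placement simply follows the examples as printed above.
```python
SYSTEM_PROMPT = ("You are a question answering assistant. Answer the question as truthful "
                 "and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด")
def build_prompt(turns, context=None, system_prompt=SYSTEM_PROMPT):
    # turns: list of (user_message, assistant_reply) pairs; use None as the reply
    # of the final, still-open turn. context: optional RAG text appended to the
    # first user message after "###", as in the RAG examples above.
    user0, reply0 = turns[0]
    if context:
        user0 = f"{user0}###{context}"
    prompt = f"<s>[INST] <<SYS>\n{system_prompt}\n<</SYS>>\n{user0} [/INST]"
    if reply0 is not None:
        prompt += f"{reply0}</s>"
    for user, reply in turns[1:]:
        prompt += f"<s>{user} [/INST]"
        if reply is not None:
            prompt += f"{reply}</s>"
    return prompt
# Single-turn example, matching the first example above:
print(build_prompt([("สวัสดีครับ", None)]))
```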
## How to use
### Huggingface
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Ensure CUDA is available
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print(f"Using device: {device}")
# Init Model
model_path="openthaigpt/openthaigpt-1.0.0-7b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, torch_dtype=torch.float16)
model.to(device)
# Prompt
prompt = "สวัสดีครับ OpenThaiGPT"
llama_prompt = f"<s>[INST] <<SYS>>\nYou are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด<</SYS>>\n\n{prompt} [/INST]"
inputs = tokenizer.encode(llama_prompt, return_tensors="pt")
inputs = inputs.to(device)
# Generate
outputs = model.generate(inputs, max_length=512, num_return_sequences=1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### vLLM
1. Install VLLM (https://github.com/vllm-project/vllm)
2. Run server
```bash
python -m vllm.entrypoints.api_server --model /path/to/model --tensor-parallel-size num_gpus
```
3. Run inference (CURL example)
```bash
curl --request POST \
--url http://localhost:8000/generate \
--header "Content-Type: application/json" \
--data '{"prompt": "<s>[INST] <<SYS>>\nYou are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด\n<</SYS>>\n\nอยากลดความอ้วนต้องทำอย่างไร [/INST]","use_beam_search": false, "temperature": 0.1, "max_tokens": 512, "top_p": 0.75, "top_k": 40, "frequency_penalty": 0.3 "stop": "</s>"}'
```
### LlamaCPP (for GGUF)
1. Build and Install LlamaCPP (LLAMA_CUBLAS=1 is for GPU inference)
```bash
git clone https://github.com/ggerganov/llama.cpp.git \
&& cd llama.cpp \
&& make -j LLAMA_CUBLAS=1 CUDA_DOCKER_ARCH=all
```
2. Run server
```bash
./server -m /path/to/ggml-model-f16.gguf -c 3072 -ngl 81 -ts 1,1 --host 0.0.0.0
```
3. Run inference (CURL example)
```bash
curl --location 'http://localhost:8000/completion' \
--header 'Content-Type: application/json' \
--data '{
"prompt":"<s>[INST] <<SYS>>\nYou are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด friendly\n\n<<SYS>>\n\nอยากลดความอ้วนต้องทำอย่างไร [/INST]",
"max_tokens": 512,
"stop":"</s>"
}'
```
### GPU Memory Requirements
| **Number of Parameters** | **FP 16 bits** | **8 bits (Quantized)** | **4 bits (Quantized)** | **Example Graphic Card for 4 bits** |
|------------------|----------------|------------------------|------------------------|---------------------------------------------|
| **7b** | 24 GB | 12 GB | 6 GB | Nvidia RTX 4060 8GB |
| **13b** | 48 GB | 24 GB | 12 GB | Nvidia RTX 4070 16GB |
| **70b** | 192 GB | 96 GB | 48 GB | Nvidia RTX 4090 24GB x 2 cards |
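As a rough illustration of the 4-bit column above, the weights can be loaded with bitsandbytes quantization through Hugging Face Transformers. This is a sketch under assumed quantization settings (NF4 weights, float16 compute); it is not an officially validated configuration, and quantization may affect output quality.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_id = "openthaigpt/openthaigpt-1.0.0-70b-chat"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # assumption: NF4 4-bit weights
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",                    # spread layers across available GPUs
)
```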
### OpenThaiGPT Team
* Kobkrit Viriyayudhakorn ([email protected])
* Sumeth Yuenyong ([email protected])
* Thaweewat Rugsujarit ([email protected])
* Jillaphat Jaroenkantasima ([email protected])
* Norapat Buppodom ([email protected])
* Koravich Sangkaew ([email protected])
* Peerawat Rojratchadakorn ([email protected])
* Surapon Nonesung ([email protected])
* Chanon Utupon ([email protected])
* Sadhis Wongprayoon ([email protected])
* Nucharee Thongthungwong ([email protected])
* Chawakorn Phiantham ([email protected])
* Patteera Triamamornwooth ([email protected])
* Nattarika Juntarapaoraya ([email protected])
* Kriangkrai Saetan ([email protected])
* Pitikorn Khlaisamniang ([email protected])
### Citation
If OpenThaiGPT has been beneficial for your work, kindly consider citing it as follows:
#### Bibtex
```bibtex
@misc{yuenyong2024openthaigpt15thaicentricopen,
title={OpenThaiGPT 1.5: A Thai-Centric Open Source Large Language Model},
author={Sumeth Yuenyong and Kobkrit Viriyayudhakorn and Apivadee Piyatumrong and Jillaphat Jaroenkantasima},
year={2024},
eprint={2411.07238},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2411.07238},
}
```
#### APA Style (for TXT, MS Word)
```
Yuenyong, S., Viriyayudhakorn, K., Piyatumrong, A., & Jaroenkantasima, J. (2024). OpenThaiGPT 1.5: A Thai-Centric Open Source Large Language Model. arXiv [Cs.CL]. Retrieved from http://arxiv.org/abs/2411.07238
```
<i>Disclaimer: Provided responses are not guaranteed.</i>
|
{"language": ["th", "en"], "library_name": "transformers", "license": "llama2", "pipeline_tag": "text-generation", "tags": ["openthaigpt", "llama"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,259 |
gaudi/opus-mt-es-swc-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-22T15:47:43Z |
2024-10-19T03:06:12+00:00
| 6 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-es-swc)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Currently supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-es-swc).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed on our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-es-swc --output_dir ./ctranslate2/opus-mt-es-swc-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
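For illustration only, a hypothetical variant of the same command using a different quantization level (this is not the command used to build this repository; `int8` is one of the quantization types the converter accepts):
```bash
# Hypothetical example: convert with int8 quantization instead of float16
ct2-transformers-converter --model Helsinki-NLP/opus-mt-es-swc --output_dir ./ctranslate2/opus-mt-es-swc-ctranslate2-int8 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization int8
```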
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-es-swc-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-es-swc-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-es-swc-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-es-swc) by Helsinki-NLP.
| null |
Non_BioNLP
|
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-es-swc)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-es-swc).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed on our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-es-swc --output_dir ./ctranslate2/opus-mt-es-swc-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-es-swc-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-es-swc-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-es-swc-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-es-swc) by Helsinki-NLP.
|
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
|
task
|
[
"TRANSLATION"
] | 42,260 |
RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf
|
RichardErkhov
| null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-08-13T00:42:15Z |
2024-08-13T11:27:54+00:00
| 26 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
calme-2.1-llama3.1-70b - GGUF
- Model creator: https://huggingface.co/MaziyarPanahi/
- Original model: https://huggingface.co/MaziyarPanahi/calme-2.1-llama3.1-70b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [calme-2.1-llama3.1-70b.Q2_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/blob/main/calme-2.1-llama3.1-70b.Q2_K.gguf) | Q2_K | 24.56GB |
| [calme-2.1-llama3.1-70b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/blob/main/calme-2.1-llama3.1-70b.IQ3_XS.gguf) | IQ3_XS | 27.29GB |
| [calme-2.1-llama3.1-70b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/blob/main/calme-2.1-llama3.1-70b.IQ3_S.gguf) | IQ3_S | 28.79GB |
| [calme-2.1-llama3.1-70b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/blob/main/calme-2.1-llama3.1-70b.Q3_K_S.gguf) | Q3_K_S | 28.79GB |
| [calme-2.1-llama3.1-70b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/blob/main/calme-2.1-llama3.1-70b.IQ3_M.gguf) | IQ3_M | 29.74GB |
| [calme-2.1-llama3.1-70b.Q3_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/blob/main/calme-2.1-llama3.1-70b.Q3_K.gguf) | Q3_K | 31.91GB |
| [calme-2.1-llama3.1-70b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/blob/main/calme-2.1-llama3.1-70b.Q3_K_M.gguf) | Q3_K_M | 31.91GB |
| [calme-2.1-llama3.1-70b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/blob/main/calme-2.1-llama3.1-70b.Q3_K_L.gguf) | Q3_K_L | 34.59GB |
| [calme-2.1-llama3.1-70b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/blob/main/calme-2.1-llama3.1-70b.IQ4_XS.gguf) | IQ4_XS | 35.64GB |
| [calme-2.1-llama3.1-70b.Q4_0.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/blob/main/calme-2.1-llama3.1-70b.Q4_0.gguf) | Q4_0 | 37.22GB |
| [calme-2.1-llama3.1-70b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | IQ4_NL | 37.58GB |
| [calme-2.1-llama3.1-70b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q4_K_S | 37.58GB |
| [calme-2.1-llama3.1-70b.Q4_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q4_K | 39.6GB |
| [calme-2.1-llama3.1-70b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q4_K_M | 39.6GB |
| [calme-2.1-llama3.1-70b.Q4_1.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q4_1 | 41.27GB |
| [calme-2.1-llama3.1-70b.Q5_0.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q5_0 | 45.32GB |
| [calme-2.1-llama3.1-70b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q5_K_S | 45.32GB |
| [calme-2.1-llama3.1-70b.Q5_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q5_K | 46.52GB |
| [calme-2.1-llama3.1-70b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q5_K_M | 46.52GB |
| [calme-2.1-llama3.1-70b.Q5_1.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q5_1 | 49.36GB |
| [calme-2.1-llama3.1-70b.Q6_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q6_K | 53.91GB |
| [calme-2.1-llama3.1-70b.Q8_0.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q8_0 | 69.83GB |
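To fetch one of the single-file quants listed above, a minimal sketch assuming the `huggingface_hub` CLI is installed (the filename is only an example; any of the files linked via `blob/main` above can be substituted):
```shell
pip install -U "huggingface_hub[cli]"
# Download a single quantized file to the current directory
huggingface-cli download RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf calme-2.1-llama3.1-70b.Q4_0.gguf --local-dir .
```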
Original model description:
---
language:
- en
pipeline_tag: text-generation
tags:
- chat
- llama
- facebook
- llaam3
- finetune
- chatml
library_name: transformers
inference: false
model_creator: MaziyarPanahi
quantized_by: MaziyarPanahi
base_model: meta-llama/Meta-Llama-3.1-70B-Instruct
model_name: calme-2.1-llama3.1-70b
datasets:
- MaziyarPanahi/truthy-dpo-v0.1-axolotl
---
<img src="./calme-2.webp" alt="Calme-2 Models" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# MaziyarPanahi/calme-2.1-llama3.1-70b
This model is a fine-tuned version of the powerful `meta-llama/Meta-Llama-3.1-70B-Instruct`, pushing the boundaries of natural language understanding and generation even further. My goal was to create a versatile and robust model that excels across a wide range of benchmarks and real-world applications.
## Use Cases
This model is suitable for a wide range of applications, including but not limited to:
- Advanced question-answering systems
- Intelligent chatbots and virtual assistants
- Content generation and summarization
- Code generation and analysis
- Complex problem-solving and decision support
# ⚡ Quantized GGUF
All GGUF models are available here: [MaziyarPanahi/calme-2.1-llama3.1-70b-GGUF](https://huggingface.co/MaziyarPanahi/calme-2.1-llama3.1-70b-GGUF)
# 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
coming soon!
This model uses `ChatML` prompt template:
```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
# How to use
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="MaziyarPanahi/calme-2.1-llama3.1-70b")
pipe(messages)
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.1-llama3.1-70b")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.1-llama3.1-70b")
```
# Ethical Considerations
As with any large language model, users should be aware of potential biases and limitations. We recommend implementing appropriate safeguards and human oversight when deploying this model in production environments.
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
calme-2.1-llama3.1-70b - GGUF
- Model creator: https://huggingface.co/MaziyarPanahi/
- Original model: https://huggingface.co/MaziyarPanahi/calme-2.1-llama3.1-70b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [calme-2.1-llama3.1-70b.Q2_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/blob/main/calme-2.1-llama3.1-70b.Q2_K.gguf) | Q2_K | 24.56GB |
| [calme-2.1-llama3.1-70b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/blob/main/calme-2.1-llama3.1-70b.IQ3_XS.gguf) | IQ3_XS | 27.29GB |
| [calme-2.1-llama3.1-70b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/blob/main/calme-2.1-llama3.1-70b.IQ3_S.gguf) | IQ3_S | 28.79GB |
| [calme-2.1-llama3.1-70b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/blob/main/calme-2.1-llama3.1-70b.Q3_K_S.gguf) | Q3_K_S | 28.79GB |
| [calme-2.1-llama3.1-70b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/blob/main/calme-2.1-llama3.1-70b.IQ3_M.gguf) | IQ3_M | 29.74GB |
| [calme-2.1-llama3.1-70b.Q3_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/blob/main/calme-2.1-llama3.1-70b.Q3_K.gguf) | Q3_K | 31.91GB |
| [calme-2.1-llama3.1-70b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/blob/main/calme-2.1-llama3.1-70b.Q3_K_M.gguf) | Q3_K_M | 31.91GB |
| [calme-2.1-llama3.1-70b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/blob/main/calme-2.1-llama3.1-70b.Q3_K_L.gguf) | Q3_K_L | 34.59GB |
| [calme-2.1-llama3.1-70b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/blob/main/calme-2.1-llama3.1-70b.IQ4_XS.gguf) | IQ4_XS | 35.64GB |
| [calme-2.1-llama3.1-70b.Q4_0.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/blob/main/calme-2.1-llama3.1-70b.Q4_0.gguf) | Q4_0 | 37.22GB |
| [calme-2.1-llama3.1-70b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | IQ4_NL | 37.58GB |
| [calme-2.1-llama3.1-70b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q4_K_S | 37.58GB |
| [calme-2.1-llama3.1-70b.Q4_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q4_K | 39.6GB |
| [calme-2.1-llama3.1-70b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q4_K_M | 39.6GB |
| [calme-2.1-llama3.1-70b.Q4_1.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q4_1 | 41.27GB |
| [calme-2.1-llama3.1-70b.Q5_0.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q5_0 | 45.32GB |
| [calme-2.1-llama3.1-70b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q5_K_S | 45.32GB |
| [calme-2.1-llama3.1-70b.Q5_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q5_K | 46.52GB |
| [calme-2.1-llama3.1-70b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q5_K_M | 46.52GB |
| [calme-2.1-llama3.1-70b.Q5_1.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q5_1 | 49.36GB |
| [calme-2.1-llama3.1-70b.Q6_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q6_K | 53.91GB |
| [calme-2.1-llama3.1-70b.Q8_0.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.1-llama3.1-70b-gguf/tree/main/) | Q8_0 | 69.83GB |
Original model description:
---
language:
- en
pipeline_tag: text-generation
tags:
- chat
- llama
- facebook
- llaam3
- finetune
- chatml
library_name: transformers
inference: false
model_creator: MaziyarPanahi
quantized_by: MaziyarPanahi
base_model: meta-llama/Meta-Llama-3.1-70B-Instruct
model_name: calme-2.1-llama3.1-70b
datasets:
- MaziyarPanahi/truthy-dpo-v0.1-axolotl
---
<img src="./calme-2.webp" alt="Calme-2 Models" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# MaziyarPanahi/calme-2.1-llama3.1-70b
This model is a fine-tuned version of the powerful `meta-llama/Meta-Llama-3.1-70B-Instruct`, pushing the boundaries of natural language understanding and generation even further. My goal was to create a versatile and robust model that excels across a wide range of benchmarks and real-world applications.
## Use Cases
This model is suitable for a wide range of applications, including but not limited to:
- Advanced question-answering systems
- Intelligent chatbots and virtual assistants
- Content generation and summarization
- Code generation and analysis
- Complex problem-solving and decision support
# ⚡ Quantized GGUF
All GGUF models are available here: [MaziyarPanahi/calme-2.1-llama3.1-70b-GGUF](https://huggingface.co/MaziyarPanahi/calme-2.1-llama3.1-70b-GGUF)
# 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
coming soon!
This model uses `ChatML` prompt template:
```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
# How to use
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="MaziyarPanahi/calme-2.1-llama3.1-70b")
pipe(messages)
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.1-llama3.1-70b")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.1-llama3.1-70b")
```
# Ethical Considerations
As with any large language model, users should be aware of potential biases and limitations. We recommend implementing appropriate safeguards and human oversight when deploying this model in production environments.
|
{}
|
task
|
[
"SUMMARIZATION"
] | 42,261 |
kyujinpy/KoT-platypus2-7B
|
kyujinpy
|
text-generation
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"dataset:kyujinpy/KoCoT_2000",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-09-29T15:19:22Z |
2023-10-19T13:28:38+00:00
| 2,088 | 6 |
---
datasets:
- kyujinpy/KoCoT_2000
language:
- ko
library_name: transformers
license: cc-by-nc-sa-4.0
pipeline_tag: text-generation
---
**This model was developed by the LLM research consortium of MediaGroup Saram-gwa-Soop Co., Ltd. ((주)미디어그룹사람과숲) and Marker Inc. ((주)마커).**
**The license is `cc-by-nc-sa-4.0`.**
# **KoT-platypus2**

**CoT + KO-platypus2 = KoT-platypus2**
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
KoT-platypus2-7B is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Repo Link**
Github KoT-platypus: [KoT-platypus2](https://github.com/KyujinHan/KoT-platypus)
**Base Model**
[KO-Platypus2-7B-ex](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex)
More detail repo(Github): [CoT-llama2](https://github.com/Marker-Inc-Korea/CoT-llama2)
More detail repo(Github): [KO-Platypus2](https://github.com/Marker-Inc-Korea/KO-Platypus)
**Training Dataset**
I used [KoCoT_2000](https://huggingface.co/datasets/kyujinpy/KoCoT_2000), which was translated from [kaist-CoT](https://huggingface.co/datasets/kaist-ai/CoT-Collection) using DeepL.
Training was done on an A100 40GB GPU in Colab.
**Training Hyperparameters**
| Hyperparameters | Value |
| --- | --- |
| batch_size | `64` |
| micro_batch_size | `1` |
| Epochs | `15` |
| learning_rate | `1e-5` |
| cutoff_len | `4096` |
| lr_scheduler | `linear` |
| base_model | `kyujinpy/KO-Platypus2-7B-ex` |
# **Model Benchmark**
## LM Eval Harness - Korean (polyglot branch)
- Used EleutherAI's [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot)
> Question Answering (QA)
### COPA (F1)
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.7196 | 0.7193 | 0.7204 | 0.7206 |
| [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.7595 | 0.7608 | 0.7638 | 0.7788 |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.7745 | 0.7676 | 0.7775 | 0.7887 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.7937 | 0.8108 | 0.8037 | 0.8369 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.7388 | 0.7626 | 0.7808 | 0.7979 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.7436 | 0.7927 | 0.8037 | 0.8259 |
| [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.7509 | 0.7899 | 0.8029 | 0.8290 |
| **KoT-platypus2-7B(ours)** | 0.7517 | 0.7868 | 0.8009 | 0.8239 |
> Natural Language Inference (NLI)
### HellaSwag (F1)
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.5247 | 0.5260 | 0.5278 | 0.5427 |
| [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.5707 | 0.5830 | 0.5670 | 0.5787 |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.5976 | 0.5998 | 0.5979 | 0.6208 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.5954 | 0.6306 | 0.6098 | 0.6118 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4518 | 0.4668 | 0.4726 | 0.4828 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4562 | 0.4657 | 0.4698 | 0.4774 |
| [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.4571 | 0.4461 | 0.4371 | 0.4525 |
| **KoT-platypus2-7B(ours)** | 0.4432 | 0.4382 | 0.4550 | 0.4534 |
> Question Answering (QA)
### BoolQ (F1)
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.3552 | 0.4751 | 0.4109 | 0.4038 |
| [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.4320 | 0.5263 | 0.4930 | 0.4038 |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.4356 | 0.5698 | 0.5187 | 0.5236 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.4818 | 0.6041 | 0.6289 | 0.6448 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.3607 | 0.6797 | 0.6801 | 0.6622 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.5786 | 0.6977 | 0.7084 | 0.7144 |
| [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.6028 | 0.6979 | 0.7016 | 0.6988 |
| **KoT-platypus2-7B(ours)** | 0.6142 | 0.6757 | 0.6839 | 0.6878 |
> Classification
### SentiNeg (F1)
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.6790 | 0.6257 | 0.5514 | 0.7851 |
| [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.4858 | 0.7950 | 0.7320 | 0.7851 |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.3394 | 0.8841 | 0.8808 | 0.9521 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.9117 | 0.9015 | 0.9345 | 0.9723 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4855 | 0.8295 | 0.8711 | 0.8513 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4594 | 0.7611 | 0.7276 | 0.9370 |
| [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.5821 | 0.7653 | 0.7991 | 0.8643 |
| **KoT-platypus2-7B(ours)** | 0.6127 | 0.7199 | 0.7531 | 0.8381 |
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/KoT-platypus2-7B"
# Hyphens are not valid in Python identifiers, so underscore names are used for the variables.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
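A minimal generation sketch using the `model` and `tokenizer` loaded above; the prompt and decoding settings are illustrative assumptions, not the settings used for the benchmarks:
```python
prompt = "한국의 수도는 어디인가요?"  # illustrative question: "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Simple greedy decoding with a small token budget; adjust as needed.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```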
> Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)
---
| null |
TBD
|
**This model was developed by the LLM research consortium of MediaGroup Saram-gwa-Soop Co., Ltd. ((주)미디어그룹사람과숲) and Marker Inc. ((주)마커).**
**The license is `cc-by-nc-sa-4.0`.**
# **KoT-platypus2**

**CoT + KO-platypus2 = KoT-platypus2**
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
KoT-platypus2-7B is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Repo Link**
Github KoT-platypus: [KoT-platypus2](https://github.com/KyujinHan/KoT-platypus)
**Base Model**
[KO-Platypus2-7B-ex](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex)
More detail repo(Github): [CoT-llama2](https://github.com/Marker-Inc-Korea/CoT-llama2)
More detail repo(Github): [KO-Platypus2](https://github.com/Marker-Inc-Korea/KO-Platypus)
**Training Dataset**
I used [KoCoT_2000](https://huggingface.co/datasets/kyujinpy/KoCoT_2000), which was translated from [kaist-CoT](https://huggingface.co/datasets/kaist-ai/CoT-Collection) using DeepL.
Training was done on an A100 40GB GPU in Colab.
**Training Hyperparameters**
| Hyperparameters | Value |
| --- | --- |
| batch_size | `64` |
| micro_batch_size | `1` |
| Epochs | `15` |
| learning_rate | `1e-5` |
| cutoff_len | `4096` |
| lr_scheduler | `linear` |
| base_model | `kyujinpy/KO-Platypus2-7B-ex` |
# **Model Benchmark**
## LM Eval Harness - Korean (polyglot branch)
- Used EleutherAI's [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot)
> Question Answering (QA)
### COPA (F1)
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.7196 | 0.7193 | 0.7204 | 0.7206 |
| [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.7595 | 0.7608 | 0.7638 | 0.7788 |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.7745 | 0.7676 | 0.7775 | 0.7887 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.7937 | 0.8108 | 0.8037 | 0.8369 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.7388 | 0.7626 | 0.7808 | 0.7979 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.7436 | 0.7927 | 0.8037 | 0.8259 |
| [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.7509 | 0.7899 | 0.8029 | 0.8290 |
| **KoT-platypus2-7B(ours)** | 0.7517 | 0.7868 | 0.8009 | 0.8239 |
> Natural Language Inference (NLI)
### HellaSwag (F1)
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.5247 | 0.5260 | 0.5278 | 0.5427 |
| [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.5707 | 0.5830 | 0.5670 | 0.5787 |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.5976 | 0.5998 | 0.5979 | 0.6208 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.5954 | 0.6306 | 0.6098 | 0.6118 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4518 | 0.4668 | 0.4726 | 0.4828 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4562 | 0.4657 | 0.4698 | 0.4774 |
| [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.4571 | 0.4461 | 0.4371 | 0.4525 |
| **KoT-platypus2-7B(ours)** | 0.4432 | 0.4382 | 0.4550 | 0.4534 |
> Question Answering (QA)
### BoolQ (F1)
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.3552 | 0.4751 | 0.4109 | 0.4038 |
| [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.4320 | 0.5263 | 0.4930 | 0.4038 |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.4356 | 0.5698 | 0.5187 | 0.5236 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.4818 | 0.6041 | 0.6289 | 0.6448 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.3607 | 0.6797 | 0.6801 | 0.6622 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.5786 | 0.6977 | 0.7084 | 0.7144 |
| [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.6028 | 0.6979 | 0.7016 | 0.6988 |
| **KoT-platypus2-7B(ours)** | 0.6142 | 0.6757 | 0.6839 | 0.6878 |
> Classification
### SentiNeg (F1)
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.6790 | 0.6257 | 0.5514 | 0.7851 |
| [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.4858 | 0.7950 | 0.7320 | 0.7851 |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.3394 | 0.8841 | 0.8808 | 0.9521 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.9117 | 0.9015 | 0.9345 | 0.9723 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4855 | 0.8295 | 0.8711 | 0.8513 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4594 | 0.7611 | 0.7276 | 0.9370 |
| [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.5821 | 0.7653 | 0.7991 | 0.8643 |
| **KoT-platypus2-7B(ours)** | 0.6127 | 0.7199 | 0.7531 | 0.8381 |
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/KoT-platypus2-7B"
# Hyphens are not valid in Python identifiers, so underscore names are used for the variables.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
> Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)
---
|
{"datasets": ["kyujinpy/KoCoT_2000"], "language": ["ko"], "library_name": "transformers", "license": "cc-by-nc-sa-4.0", "pipeline_tag": "text-generation"}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,262 |
durgaphaniteja985/mt5-small-finetuned-amazon-en-es
|
durgaphaniteja985
|
summarization
|
[
"transformers",
"tf",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-12-03T21:07:09Z |
2024-12-04T04:27:58+00:00
| 33 | 0 |
---
base_model: google/mt5-small
library_name: transformers
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2159
- Rouge1: 17.3504
- Rouge2: 8.219
- Rougel: 16.7598
- Rougelsum: 16.9027
## Model description
More information needed
## Intended uses & limitations
More information needed
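In the absence of further documentation, a minimal usage sketch (assuming the standard `transformers` summarization pipeline; the review text and generation length below are purely illustrative):
```python
from transformers import pipeline

# Load the fine-tuned mT5 checkpoint as a summarization pipeline.
summarizer = pipeline("summarization", model="durgaphaniteja985/mt5-small-finetuned-amazon-en-es")

review = (
    "I bought this for my daughter and she loves it. The build quality is great, "
    "it arrived quickly, and the battery lasts much longer than I expected."
)
print(summarizer(review, max_length=30)[0]["summary_text"])
```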
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 5.8554 | 1.0 | 1541 | 3.4109 | 14.0843 | 7.0483 | 13.7876 | 13.7424 |
| 3.5936 | 2.0 | 3082 | 3.3110 | 16.9351 | 8.9452 | 16.1951 | 16.2707 |
| 3.2943 | 3.0 | 4623 | 3.2440 | 18.9749 | 10.2572 | 18.3207 | 18.3447 |
| 3.1378 | 4.0 | 6164 | 3.2157 | 17.5435 | 9.5501 | 16.976 | 16.9846 |
| 3.0374 | 5.0 | 7705 | 3.2017 | 17.2499 | 8.6003 | 16.8574 | 16.8485 |
| 2.9644 | 6.0 | 9246 | 3.1948 | 16.7856 | 7.7093 | 16.3617 | 16.4731 |
| 2.921 | 7.0 | 10787 | 3.2160 | 18.1708 | 8.8001 | 17.4812 | 17.5949 |
| 2.8907 | 8.0 | 12328 | 3.2159 | 17.3504 | 8.219 | 16.7598 | 16.9027 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2159
- Rouge1: 17.3504
- Rouge2: 8.219
- Rougel: 16.7598
- Rougelsum: 16.9027
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 5.8554 | 1.0 | 1541 | 3.4109 | 14.0843 | 7.0483 | 13.7876 | 13.7424 |
| 3.5936 | 2.0 | 3082 | 3.3110 | 16.9351 | 8.9452 | 16.1951 | 16.2707 |
| 3.2943 | 3.0 | 4623 | 3.2440 | 18.9749 | 10.2572 | 18.3207 | 18.3447 |
| 3.1378 | 4.0 | 6164 | 3.2157 | 17.5435 | 9.5501 | 16.976 | 16.9846 |
| 3.0374 | 5.0 | 7705 | 3.2017 | 17.2499 | 8.6003 | 16.8574 | 16.8485 |
| 2.9644 | 6.0 | 9246 | 3.1948 | 16.7856 | 7.7093 | 16.3617 | 16.4731 |
| 2.921 | 7.0 | 10787 | 3.2160 | 18.1708 | 8.8001 | 17.4812 | 17.5949 |
| 2.8907 | 8.0 | 12328 | 3.2159 | 17.3504 | 8.219 | 16.7598 | 16.9027 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
{"base_model": "google/mt5-small", "library_name": "transformers", "license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "mt5-small-finetuned-amazon-en-es", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 42,263 |
A-Bar/BioMedNLP_DeBERTa_all_updates
|
A-Bar
|
text-classification
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:sem_eval_2024_task_2",
"base_model:hongpingjun98/BioMedNLP_DeBERTa",
"base_model:finetune:hongpingjun98/BioMedNLP_DeBERTa",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-01-23T17:13:36Z |
2024-01-23T23:03:32+00:00
| 5 | 0 |
---
base_model: hongpingjun98/BioMedNLP_DeBERTa
datasets:
- sem_eval_2024_task_2
license: mit
metrics:
- accuracy
- precision
- recall
- f1
tags:
- generated_from_trainer
model-index:
- name: BioMedNLP_DeBERTa_all_updates
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: sem_eval_2024_task_2
type: sem_eval_2024_task_2
config: sem_eval_2024_task_2_source
split: validation
args: sem_eval_2024_task_2_source
metrics:
- type: accuracy
value: 0.705
name: Accuracy
- type: precision
value: 0.7238235615241838
name: Precision
- type: recall
value: 0.7050000000000001
name: Recall
- type: f1
value: 0.6986644194182692
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioMedNLP_DeBERTa_all_updates
This model is a fine-tuned version of [hongpingjun98/BioMedNLP_DeBERTa](https://huggingface.co/hongpingjun98/BioMedNLP_DeBERTa) on the sem_eval_2024_task_2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1863
- Accuracy: 0.705
- Precision: 0.7238
- Recall: 0.7050
- F1: 0.6987
## Model description
More information needed
## Intended uses & limitations
More information needed
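In the absence of further documentation, a minimal usage sketch (assuming the standard `transformers` text-classification pipeline; how the clinical-trial premise and statement are paired here is an assumption, not the documented input format):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="A-Bar/BioMedNLP_DeBERTa_all_updates")

# Illustrative premise/statement pair; label names come from the model config.
premise = "Patients in the intervention arm received 10 mg of the study drug daily for 12 weeks."
statement = "The intervention arm received the study drug for less than one year."
print(classifier({"text": premise, "text_pair": statement}))
```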
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.4238 | 1.0 | 116 | 0.6639 | 0.665 | 0.6678 | 0.665 | 0.6636 |
| 0.4316 | 2.0 | 232 | 0.6644 | 0.68 | 0.6875 | 0.6800 | 0.6768 |
| 0.3819 | 3.0 | 348 | 0.7328 | 0.71 | 0.7188 | 0.71 | 0.7071 |
| 0.3243 | 4.0 | 464 | 0.9162 | 0.7 | 0.7083 | 0.7 | 0.6970 |
| 0.4053 | 5.0 | 580 | 0.7145 | 0.715 | 0.7214 | 0.7150 | 0.7129 |
| 0.2548 | 6.0 | 696 | 1.0598 | 0.69 | 0.7016 | 0.69 | 0.6855 |
| 0.3455 | 7.0 | 812 | 0.7782 | 0.72 | 0.7232 | 0.72 | 0.7190 |
| 0.2177 | 8.0 | 928 | 1.1182 | 0.69 | 0.6950 | 0.69 | 0.6880 |
| 0.2304 | 9.0 | 1044 | 1.4332 | 0.695 | 0.708 | 0.695 | 0.6902 |
| 0.2103 | 10.0 | 1160 | 1.2736 | 0.7 | 0.7198 | 0.7 | 0.6931 |
| 0.1748 | 11.0 | 1276 | 1.2654 | 0.675 | 0.6816 | 0.675 | 0.6720 |
| 0.1608 | 12.0 | 1392 | 1.8885 | 0.63 | 0.6689 | 0.63 | 0.6074 |
| 0.1082 | 13.0 | 1508 | 1.7004 | 0.68 | 0.7005 | 0.6800 | 0.6716 |
| 0.1074 | 14.0 | 1624 | 1.8145 | 0.67 | 0.6804 | 0.67 | 0.6652 |
| 0.0238 | 15.0 | 1740 | 1.7608 | 0.68 | 0.6931 | 0.68 | 0.6745 |
| 0.038 | 16.0 | 1856 | 1.9937 | 0.67 | 0.6953 | 0.6700 | 0.6589 |
| 0.0365 | 17.0 | 1972 | 2.1871 | 0.675 | 0.6964 | 0.675 | 0.6659 |
| 0.0144 | 18.0 | 2088 | 2.1093 | 0.695 | 0.7059 | 0.6950 | 0.6909 |
| 0.0014 | 19.0 | 2204 | 2.1559 | 0.695 | 0.7103 | 0.6950 | 0.6893 |
| 0.0324 | 20.0 | 2320 | 2.1863 | 0.705 | 0.7238 | 0.7050 | 0.6987 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| null |
BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioMedNLP_DeBERTa_all_updates
This model is a fine-tuned version of [hongpingjun98/BioMedNLP_DeBERTa](https://huggingface.co/hongpingjun98/BioMedNLP_DeBERTa) on the sem_eval_2024_task_2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1863
- Accuracy: 0.705
- Precision: 0.7238
- Recall: 0.7050
- F1: 0.6987
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.4238 | 1.0 | 116 | 0.6639 | 0.665 | 0.6678 | 0.665 | 0.6636 |
| 0.4316 | 2.0 | 232 | 0.6644 | 0.68 | 0.6875 | 0.6800 | 0.6768 |
| 0.3819 | 3.0 | 348 | 0.7328 | 0.71 | 0.7188 | 0.71 | 0.7071 |
| 0.3243 | 4.0 | 464 | 0.9162 | 0.7 | 0.7083 | 0.7 | 0.6970 |
| 0.4053 | 5.0 | 580 | 0.7145 | 0.715 | 0.7214 | 0.7150 | 0.7129 |
| 0.2548 | 6.0 | 696 | 1.0598 | 0.69 | 0.7016 | 0.69 | 0.6855 |
| 0.3455 | 7.0 | 812 | 0.7782 | 0.72 | 0.7232 | 0.72 | 0.7190 |
| 0.2177 | 8.0 | 928 | 1.1182 | 0.69 | 0.6950 | 0.69 | 0.6880 |
| 0.2304 | 9.0 | 1044 | 1.4332 | 0.695 | 0.708 | 0.695 | 0.6902 |
| 0.2103 | 10.0 | 1160 | 1.2736 | 0.7 | 0.7198 | 0.7 | 0.6931 |
| 0.1748 | 11.0 | 1276 | 1.2654 | 0.675 | 0.6816 | 0.675 | 0.6720 |
| 0.1608 | 12.0 | 1392 | 1.8885 | 0.63 | 0.6689 | 0.63 | 0.6074 |
| 0.1082 | 13.0 | 1508 | 1.7004 | 0.68 | 0.7005 | 0.6800 | 0.6716 |
| 0.1074 | 14.0 | 1624 | 1.8145 | 0.67 | 0.6804 | 0.67 | 0.6652 |
| 0.0238 | 15.0 | 1740 | 1.7608 | 0.68 | 0.6931 | 0.68 | 0.6745 |
| 0.038 | 16.0 | 1856 | 1.9937 | 0.67 | 0.6953 | 0.6700 | 0.6589 |
| 0.0365 | 17.0 | 1972 | 2.1871 | 0.675 | 0.6964 | 0.675 | 0.6659 |
| 0.0144 | 18.0 | 2088 | 2.1093 | 0.695 | 0.7059 | 0.6950 | 0.6909 |
| 0.0014 | 19.0 | 2204 | 2.1559 | 0.695 | 0.7103 | 0.6950 | 0.6893 |
| 0.0324 | 20.0 | 2320 | 2.1863 | 0.705 | 0.7238 | 0.7050 | 0.6987 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
{"base_model": "hongpingjun98/BioMedNLP_DeBERTa", "datasets": ["sem_eval_2024_task_2"], "license": "mit", "metrics": ["accuracy", "precision", "recall", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "BioMedNLP_DeBERTa_all_updates", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "sem_eval_2024_task_2", "type": "sem_eval_2024_task_2", "config": "sem_eval_2024_task_2_source", "split": "validation", "args": "sem_eval_2024_task_2_source"}, "metrics": [{"type": "accuracy", "value": 0.705, "name": "Accuracy"}, {"type": "precision", "value": 0.7238235615241838, "name": "Precision"}, {"type": "recall", "value": 0.7050000000000001, "name": "Recall"}, {"type": "f1", "value": 0.6986644194182692, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,264 |
THU-KEG/kopl_semantic_parser
|
THU-KEG
|
text2text-generation
|
[
"pytorch",
"bart",
"semantic parsing",
"KoPL",
"text2text-generation",
"en",
"region:us"
] | 2024-08-07T06:20:29Z |
2024-08-07T07:29:53+00:00
| 5 | 1 |
---
language:
- en
pipeline_tag: text2text-generation
tags:
- semantic parsing
- bart
- KoPL
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is fine-tuned from the BART-base model for the semantic parsing task of converting a natural language question into a logical form called a KoPL program. The model is fine-tuned on the [KQA Pro dataset](https://aclanthology.org/2022.acl-long.422/).
## Model Details
### Model Description
- **Model type:** Semantic parsing model
- **Language(s) (NLP):** English
- **Finetuned from model:** Bart-base
## How to Get Started with the Model
Refer to the code linked below to get started with the model.
[Github Link](https://github.com/THU-KEG/DiaKoP/blob/main/backend-src/semantic_parser.py)
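A minimal loading-and-generation sketch, assuming the checkpoint exposes the standard BART sequence-to-sequence interface; any question preprocessing and program post-processing specific to KQA Pro follows the linked `semantic_parser.py` and is not reproduced here:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "THU-KEG/kopl_semantic_parser"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

question = "Who directed the movie Inception?"  # illustrative question
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_length=256)

# The decoded string is a linearized KoPL program; splitting it into function
# calls and arguments follows the logic in the linked repository.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```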
## Citation
**BibTeX:**
```
@inproceedings{cao-etal-2022-kqa,
title = "{KQA} Pro: A Dataset with Explicit Compositional Programs for Complex Question Answering over Knowledge Base",
author = "Cao, Shulin and
Shi, Jiaxin and
Pan, Liangming and
Nie, Lunyiu and
Xiang, Yutong and
Hou, Lei and
Li, Juanzi and
He, Bin and
Zhang, Hanwang",
editor = "Muresan, Smaranda and
Nakov, Preslav and
Villavicencio, Aline",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.422",
doi = "10.18653/v1/2022.acl-long.422",
pages = "6101--6119",
abstract = "Complex question answering over knowledge base (Complex KBQA) is challenging because it requires various compositional reasoning capabilities, such as multi-hop inference, attribute comparison, set operation, etc. Existing benchmarks have some shortcomings that limit the development of Complex KBQA: 1) they only provide QA pairs without explicit reasoning processes; 2) questions are poor in diversity or scale. To this end, we introduce KQA Pro, a dataset for Complex KBQA including around 120K diverse natural language questions. We introduce a compositional and interpretable programming language KoPL to represent the reasoning process of complex questions. For each question, we provide the corresponding KoPL program and SPARQL query, so that KQA Pro can serve for both KBQA and semantic parsing tasks. Experimental results show that state-of-the-art KBQA methods cannot achieve promising results on KQA Pro as on current datasets, which suggests that KQA Pro is challenging and Complex KBQA requires further research efforts. We also treat KQA Pro as a diagnostic dataset for testing multiple reasoning skills, conduct a thorough evaluation of existing models and discuss further directions for Complex KBQA. Our codes and datasets can be obtained from \url{https://github.com/shijx12/KQAPro_Baselines}.",
}
```
| null |
Non_BioNLP
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is fine-tuned from the BART-base model for the semantic parsing task of converting a natural language question into a logical form called a KoPL program. The model is fine-tuned on the [KQA Pro dataset](https://aclanthology.org/2022.acl-long.422/).
## Model Details
### Model Description
- **Model type:** Semantic parsing model
- **Language(s) (NLP):** English
- **Finetuned from model:** Bart-base
## How to Get Started with the Model
Refer to the code linked below to get started with the model.
[Github Link](https://github.com/THU-KEG/DiaKoP/blob/main/backend-src/semantic_parser.py)
## Citation
**BibTeX:**
```
@inproceedings{cao-etal-2022-kqa,
title = "{KQA} Pro: A Dataset with Explicit Compositional Programs for Complex Question Answering over Knowledge Base",
author = "Cao, Shulin and
Shi, Jiaxin and
Pan, Liangming and
Nie, Lunyiu and
Xiang, Yutong and
Hou, Lei and
Li, Juanzi and
He, Bin and
Zhang, Hanwang",
editor = "Muresan, Smaranda and
Nakov, Preslav and
Villavicencio, Aline",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.422",
doi = "10.18653/v1/2022.acl-long.422",
pages = "6101--6119",
abstract = "Complex question answering over knowledge base (Complex KBQA) is challenging because it requires various compositional reasoning capabilities, such as multi-hop inference, attribute comparison, set operation, etc. Existing benchmarks have some shortcomings that limit the development of Complex KBQA: 1) they only provide QA pairs without explicit reasoning processes; 2) questions are poor in diversity or scale. To this end, we introduce KQA Pro, a dataset for Complex KBQA including around 120K diverse natural language questions. We introduce a compositional and interpretable programming language KoPL to represent the reasoning process of complex questions. For each question, we provide the corresponding KoPL program and SPARQL query, so that KQA Pro can serve for both KBQA and semantic parsing tasks. Experimental results show that state-of-the-art KBQA methods cannot achieve promising results on KQA Pro as on current datasets, which suggests that KQA Pro is challenging and Complex KBQA requires further research efforts. We also treat KQA Pro as a diagnostic dataset for testing multiple reasoning skills, conduct a thorough evaluation of existing models and discuss further directions for Complex KBQA. Our codes and datasets can be obtained from \url{https://github.com/shijx12/KQAPro_Baselines}.",
}
```
|
{"language": ["en"], "pipeline_tag": "text2text-generation", "tags": ["semantic parsing", "bart", "KoPL"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,265 |
tensorblock/airoboros-dpo-70b-3.3-GGUF
|
tensorblock
| null |
[
"gguf",
"llama-3",
"TensorBlock",
"GGUF",
"dataset:jondurbin/airoboros-3.2",
"dataset:bluemoon-fandom-1-1-rp-cleaned",
"dataset:boolq",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/cinematika-v0.1",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:grimulkan/LimaRP-augmented",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:mattpscott/airoboros-summarization",
"dataset:unalignment/toxic-dpo-v0.2",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"dataset:jondurbin/contextual-dpo-v0.1",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:jondurbin/py-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:lmsys/lmsys-chat-1m",
"base_model:jondurbin/airoboros-dpo-70b-3.3",
"base_model:quantized:jondurbin/airoboros-dpo-70b-3.3",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-12-10T05:33:21Z |
2024-12-10T11:26:09+00:00
| 188 | 0 |
---
base_model: jondurbin/airoboros-dpo-70b-3.3
datasets:
- jondurbin/airoboros-3.2
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- glaiveai/glaive-function-calling-v2
- grimulkan/LimaRP-augmented
- piqa
- Vezora/Tested-22k-Python-Alpaca
- mattpscott/airoboros-summarization
- unalignment/toxic-dpo-v0.2
- allenai/ultrafeedback_binarized_cleaned
- argilla/distilabel-intel-orca-dpo-pairs
- jondurbin/airoboros-3.2
- jondurbin/contextual-dpo-v0.1
- jondurbin/gutenberg-dpo-v0.1
- jondurbin/py-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- jondurbin/gutenberg-dpo-v0.1
- lmsys/lmsys-chat-1m
license: other
license_name: llama3
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE
tags:
- llama-3
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## jondurbin/airoboros-dpo-70b-3.3 - GGUF
This repo contains GGUF format model files for [jondurbin/airoboros-dpo-70b-3.3](https://huggingface.co/jondurbin/airoboros-dpo-70b-3.3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [airoboros-dpo-70b-3.3-Q2_K.gguf](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q2_K.gguf) | Q2_K | 26.375 GB | smallest, significant quality loss - not recommended for most purposes |
| [airoboros-dpo-70b-3.3-Q3_K_S.gguf](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q3_K_S.gguf) | Q3_K_S | 30.912 GB | very small, high quality loss |
| [airoboros-dpo-70b-3.3-Q3_K_M.gguf](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q3_K_M.gguf) | Q3_K_M | 34.267 GB | very small, high quality loss |
| [airoboros-dpo-70b-3.3-Q3_K_L.gguf](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q3_K_L.gguf) | Q3_K_L | 37.141 GB | small, substantial quality loss |
| [airoboros-dpo-70b-3.3-Q4_0.gguf](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q4_0.gguf) | Q4_0 | 39.970 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [airoboros-dpo-70b-3.3-Q4_K_S.gguf](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q4_K_S.gguf) | Q4_K_S | 40.347 GB | small, greater quality loss |
| [airoboros-dpo-70b-3.3-Q4_K_M.gguf](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q4_K_M.gguf) | Q4_K_M | 42.520 GB | medium, balanced quality - recommended |
| [airoboros-dpo-70b-3.3-Q5_0.gguf](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q5_0.gguf) | Q5_0 | 48.657 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [airoboros-dpo-70b-3.3-Q5_K_S.gguf](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q5_K_S.gguf) | Q5_K_S | 48.657 GB | large, low quality loss - recommended |
| [airoboros-dpo-70b-3.3-Q5_K_M.gguf](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q5_K_M.gguf) | Q5_K_M | 49.950 GB | large, very low quality loss - recommended |
| [airoboros-dpo-70b-3.3-Q6_K](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q6_K) | Q6_K | 57.888 GB | very large, extremely low quality loss |
| [airoboros-dpo-70b-3.3-Q8_0](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q8_0) | Q8_0 | 74.975 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub command-line client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/airoboros-dpo-70b-3.3-GGUF --include "airoboros-dpo-70b-3.3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/airoboros-dpo-70b-3.3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
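Once a quant is on disk, one way to load it locally is through the llama-cpp-python bindings (a separate package, not covered by this card); the snippet below is a rough sketch, and the file name and directory are placeholders:
```python
# Sketch: load a downloaded quant with llama-cpp-python and run a single
# completion using the prompt template from this card. Paths are placeholders.
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="MY_LOCAL_DIR/airoboros-dpo-70b-3.3-Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    "Why is the sky blue?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)
out = llm(prompt, max_tokens=128)
print(out["choices"][0]["text"])
```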
| null |
Non_BioNLP
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## jondurbin/airoboros-dpo-70b-3.3 - GGUF
This repo contains GGUF format model files for [jondurbin/airoboros-dpo-70b-3.3](https://huggingface.co/jondurbin/airoboros-dpo-70b-3.3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [airoboros-dpo-70b-3.3-Q2_K.gguf](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q2_K.gguf) | Q2_K | 26.375 GB | smallest, significant quality loss - not recommended for most purposes |
| [airoboros-dpo-70b-3.3-Q3_K_S.gguf](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q3_K_S.gguf) | Q3_K_S | 30.912 GB | very small, high quality loss |
| [airoboros-dpo-70b-3.3-Q3_K_M.gguf](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q3_K_M.gguf) | Q3_K_M | 34.267 GB | very small, high quality loss |
| [airoboros-dpo-70b-3.3-Q3_K_L.gguf](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q3_K_L.gguf) | Q3_K_L | 37.141 GB | small, substantial quality loss |
| [airoboros-dpo-70b-3.3-Q4_0.gguf](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q4_0.gguf) | Q4_0 | 39.970 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [airoboros-dpo-70b-3.3-Q4_K_S.gguf](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q4_K_S.gguf) | Q4_K_S | 40.347 GB | small, greater quality loss |
| [airoboros-dpo-70b-3.3-Q4_K_M.gguf](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q4_K_M.gguf) | Q4_K_M | 42.520 GB | medium, balanced quality - recommended |
| [airoboros-dpo-70b-3.3-Q5_0.gguf](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q5_0.gguf) | Q5_0 | 48.657 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [airoboros-dpo-70b-3.3-Q5_K_S.gguf](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q5_K_S.gguf) | Q5_K_S | 48.657 GB | large, low quality loss - recommended |
| [airoboros-dpo-70b-3.3-Q5_K_M.gguf](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q5_K_M.gguf) | Q5_K_M | 49.950 GB | large, very low quality loss - recommended |
| [airoboros-dpo-70b-3.3-Q6_K](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q6_K) | Q6_K | 57.888 GB | very large, extremely low quality loss |
| [airoboros-dpo-70b-3.3-Q8_0](https://huggingface.co/tensorblock/airoboros-dpo-70b-3.3-GGUF/blob/main/airoboros-dpo-70b-3.3-Q8_0) | Q8_0 | 74.975 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub command-line client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/airoboros-dpo-70b-3.3-GGUF --include "airoboros-dpo-70b-3.3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/airoboros-dpo-70b-3.3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
{"base_model": "jondurbin/airoboros-dpo-70b-3.3", "datasets": ["jondurbin/airoboros-3.2", "bluemoon-fandom-1-1-rp-cleaned", "boolq", "LDJnr/Capybara", "jondurbin/cinematika-v0.1", "glaiveai/glaive-function-calling-v2", "grimulkan/LimaRP-augmented", "piqa", "Vezora/Tested-22k-Python-Alpaca", "mattpscott/airoboros-summarization", "unalignment/toxic-dpo-v0.2", "allenai/ultrafeedback_binarized_cleaned", "argilla/distilabel-intel-orca-dpo-pairs", "jondurbin/airoboros-3.2", "jondurbin/contextual-dpo-v0.1", "jondurbin/gutenberg-dpo-v0.1", "jondurbin/py-dpo-v0.1", "jondurbin/truthy-dpo-v0.1", "jondurbin/gutenberg-dpo-v0.1", "lmsys/lmsys-chat-1m"], "license": "other", "license_name": "llama3", "license_link": "https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE", "tags": ["llama-3", "TensorBlock", "GGUF"]}
|
task
|
[
"SUMMARIZATION"
] | 42,266 |
DeepMostInnovations/hindi-embedding-foundational-model
|
DeepMostInnovations
|
sentence-similarity
|
[
"transformers",
"safetensors",
"hindi",
"embeddings",
"sentence-embeddings",
"semantic-search",
"text-similarity",
"sentence-similarity",
"hi",
"dataset:custom",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2025-03-10T10:59:11Z |
2025-03-13T14:59:25+00:00
| 8 | 0 |
---
datasets:
- custom
language: hi
library_name: transformers
license: mit
pipeline_tag: sentence-similarity
tags:
- hindi
- embeddings
- sentence-embeddings
- semantic-search
- text-similarity
---
# Hindi Sentence Embeddings Model
This is a custom state-of-the-art sentence embedding model trained specifically for Hindi text. It leverages an advanced transformer architecture with specialized pooling strategies to create high-quality semantic representations of Hindi sentences.
## Features
- Specialized for Hindi language text
- Advanced transformer architecture with optimized attention mechanism
- Multiple pooling strategies for enhanced semantic representations
- Creates normalized vector representations for semantic similarity
- Supports semantic search and text similarity applications
## Usage
### Installation
```bash
pip install torch sentencepiece scikit-learn matplotlib
git lfs install
git clone https://huggingface.co/DeepMostInnovations/hindi-embedding-foundational-model
cd hindi-embedding-foundational-model
```
### Enhanced RAG System
This model now includes an enhanced RAG (Retrieval Augmented Generation) system that integrates Unsloth's optimized Llama-3.2-1B-Instruct model for question answering on top of Hindi document retrieval.
#### Setup and Installation
1. Install additional dependencies:
```bash
pip install unsloth transformers bitsandbytes accelerate langchain langchain-community faiss-cpu
```
2. Index your documents:
```bash
python hindi-rag-system.py --model_dir /path/to/your/model --tokenizer_dir /path/to/tokenizer --data_dir ./data --output_dir ./output --index
```
3. Run in QA mode with LLM:
```bash
python hindi-rag-system.py --model_dir /path/to/your/model --tokenizer_dir /path/to/tokenizer --output_dir ./output --interactive --qa
```
### Basic Embedding Usage
```python
from hindi_embeddings import HindiEmbedder
# Initialize the embedder
model = HindiEmbedder("path/to/hindi-embedding-foundational-model")
# Encode sentences to embeddings
sentences = [
"मुझे हिंदी भाषा बहुत पसंद है।",
"मैं हिंदी भाषा सीख रहा हूँ।"
]
embeddings = model.encode(sentences)
print(f"Embedding shape: {embeddings.shape}")
# Compute similarity between sentences
similarity = model.compute_similarity(sentences[0], sentences[1])
print(f"Similarity: {similarity:.4f}")
# Perform semantic search
query = "भारत की राजधानी"
documents = [
"दिल्ली भारत की राजधानी है।",
"मुंबई भारत का सबसे बड़ा शहर है।",
"हिमालय पर्वत भारत के उत्तर में स्थित है।"
]
results = model.search(query, documents)
for i, result in enumerate(results):
print(f"{i+1}. Score: {result['score']:.4f}")
print(f" Document: {result['document']}")
# Visualize embeddings
example_sentences = [
"मुझे हिंदी में पढ़ना बहुत पसंद है।",
"आज मौसम बहुत अच्छा है।",
"भारत एक विशाल देश है।"
]
model.visualize_embeddings(example_sentences)
```
## Model Details
This model uses an advanced transformer-based architecture with the following enhancements:
- Pre-layer normalization for stable training
- Specialized attention mechanism with relative positional encoding
- Multiple pooling strategies (weighted, mean, attention-based)
- L2-normalized vectors for cosine similarity
Technical specifications:
- Embedding dimension: 768
- Hidden dimension: 768
- Layers: 12
- Attention heads: 12
- Vocabulary size: 50,000
- Context length: 128 tokens
## Applications
- Semantic search and information retrieval
- Text clustering and categorization
- Recommendation systems
- Question answering
- Document similarity comparison
- Content-based filtering
- RAG systems for Hindi language content
## License
This model is released under the MIT License.
## Citation
If you use this model in your research or application, please cite us:
```
@misc{DeepMostInnovations2025hindi,
author = {DeepMost Innovations},
title = {Hindi Sentence Embeddings Model},
year = {2025},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/DeepMostInnovations/hindi-embedding-foundational-model}}
}
```
| null |
Non_BioNLP
|
# Hindi Sentence Embeddings Model
This is a custom state-of-the-art sentence embedding model trained specifically for Hindi text. It leverages an advanced transformer architecture with specialized pooling strategies to create high-quality semantic representations of Hindi sentences.
## Features
- Specialized for Hindi language text
- Advanced transformer architecture with optimized attention mechanism
- Multiple pooling strategies for enhanced semantic representations
- Creates normalized vector representations for semantic similarity
- Supports semantic search and text similarity applications
## Usage
### Installation
```bash
pip install torch sentencepiece scikit-learn matplotlib
git lfs install
git clone https://huggingface.co/DeepMostInnovations/hindi-embedding-foundational-model
cd hindi-embedding-foundational-model
```
### Enhanced RAG System
This model now includes an enhanced RAG (Retrieval Augmented Generation) system that integrates Unsloth's optimized Llama-3.2-1B-Instruct model for question answering on top of Hindi document retrieval.
#### Setup and Installation
1. Install additional dependencies:
```bash
pip install unsloth transformers bitsandbytes accelerate langchain langchain-community faiss-cpu
```
2. Index your documents:
```bash
python hindi-rag-system.py --model_dir /path/to/your/model --tokenizer_dir /path/to/tokenizer --data_dir ./data --output_dir ./output --index
```
3. Run in QA mode with LLM:
```bash
python hindi-rag-system.py --model_dir /path/to/your/model --tokenizer_dir /path/to/tokenizer --output_dir ./output --interactive --qa
```
### Basic Embedding Usage
```python
from hindi_embeddings import HindiEmbedder
# Initialize the embedder
model = HindiEmbedder("path/to/hindi-embedding-foundational-model")
# Encode sentences to embeddings
sentences = [
"मुझे हिंदी भाषा बहुत पसंद है।",
"मैं हिंदी भाषा सीख रहा हूँ।"
]
embeddings = model.encode(sentences)
print(f"Embedding shape: {embeddings.shape}")
# Compute similarity between sentences
similarity = model.compute_similarity(sentences[0], sentences[1])
print(f"Similarity: {similarity:.4f}")
# Perform semantic search
query = "भारत की राजधानी"
documents = [
"दिल्ली भारत की राजधानी है।",
"मुंबई भारत का सबसे बड़ा शहर है।",
"हिमालय पर्वत भारत के उत्तर में स्थित है।"
]
results = model.search(query, documents)
for i, result in enumerate(results):
print(f"{i+1}. Score: {result['score']:.4f}")
print(f" Document: {result['document']}")
# Visualize embeddings
example_sentences = [
"मुझे हिंदी में पढ़ना बहुत पसंद है।",
"आज मौसम बहुत अच्छा है।",
"भारत एक विशाल देश है।"
]
model.visualize_embeddings(example_sentences)
```
## Model Details
This model uses an advanced transformer-based architecture with the following enhancements:
- Pre-layer normalization for stable training
- Specialized attention mechanism with relative positional encoding
- Multiple pooling strategies (weighted, mean, attention-based)
- L2-normalized vectors for cosine similarity
Technical specifications:
- Embedding dimension: 768
- Hidden dimension: 768
- Layers: 12
- Attention heads: 12
- Vocabulary size: 50,000
- Context length: 128 tokens
## Applications
- Semantic search and information retrieval
- Text clustering and categorization
- Recommendation systems
- Question answering
- Document similarity comparison
- Content-based filtering
- RAG systems for Hindi language content
## License
This model is released under the MIT License.
## Citation
If you use this model in your research or application, please cite us:
```
@misc{DeepMostInnovations2025hindi,
author = {DeepMost Innovations},
title = {Hindi Sentence Embeddings Model},
year = {2025},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/DeepMostInnovations/hindi-embedding-foundational-model}}
}
```
|
{"datasets": ["custom"], "language": "hi", "library_name": "transformers", "license": "mit", "pipeline_tag": "sentence-similarity", "tags": ["hindi", "embeddings", "sentence-embeddings", "semantic-search", "text-similarity"]}
|
task
|
[
"QUESTION_ANSWERING",
"SEMANTIC_SIMILARITY"
] | 42,267 |
theahmadfaiq/distilbert-base-uncased-finetuned-emotion
|
theahmadfaiq
|
text-classification
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-10-15T20:04:44Z |
2023-10-15T20:11:16+00:00
| 93 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.923
name: Accuracy
- type: f1
value: 0.9229098207584335
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2191
- Accuracy: 0.923
- F1: 0.9229
## Model description
More information needed
## Intended uses & limitations
More information needed
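Pending a fuller write-up, a minimal inference sketch (the repo id is taken from this card's title; the example sentence is illustrative) using the standard text-classification pipeline:
```python
# Sketch: emotion classification with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="theahmadfaiq/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how well this turned out!"))
```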
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8141 | 1.0 | 250 | 0.3131 | 0.9015 | 0.9003 |
| 0.2441 | 2.0 | 500 | 0.2191 | 0.923 | 0.9229 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2191
- Accuracy: 0.923
- F1: 0.9229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8141 | 1.0 | 250 | 0.3131 | 0.9015 | 0.9003 |
| 0.2441 | 2.0 | 500 | 0.2191 | 0.923 | 0.9229 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.923, "name": "Accuracy"}, {"type": "f1", "value": 0.9229098207584335, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,268 |
robertrengel/autotrain-traductor-en-es-2023-3608896670
|
robertrengel
|
translation
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain",
"translation",
"unk",
"dataset:robertrengel/autotrain-data-traductor-en-es-2023",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-02-20T18:12:38Z |
2023-02-20T18:14:34+00:00
| 19 | 0 |
---
datasets:
- robertrengel/autotrain-data-traductor-en-es-2023
language:
- unk
- unk
tags:
- autotrain
- translation
co2_eq_emissions:
emissions: 2.5094872306394733
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 3608896670
- CO2 Emissions (in grams): 2.5095
## Validation Metrics
- Loss: 0.118
- SacreBLEU: 85.088
- Gen len: 10.172
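A minimal usage sketch (assuming the checkpoint published under this card's repo id and the standard translation pipeline; the example sentence is illustrative):
```python
# Sketch: English -> Spanish translation with the AutoTrain Marian checkpoint.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="robertrengel/autotrain-traductor-en-es-2023-3608896670",
)
print(translator("The weather is nice today.")[0]["translation_text"])
```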
| null |
Non_BioNLP
|
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 3608896670
- CO2 Emissions (in grams): 2.5095
## Validation Metrics
- Loss: 0.118
- SacreBLEU: 85.088
- Gen len: 10.172
|
{"datasets": ["robertrengel/autotrain-data-traductor-en-es-2023"], "language": ["unk", "unk"], "tags": ["autotrain", "translation"], "co2_eq_emissions": {"emissions": 2.5094872306394733}}
|
task
|
[
"TRANSLATION"
] | 42,269 |
milekaterine/borrar
|
milekaterine
|
summarization
|
[
"fairseq",
"climate",
"summarization",
"am",
"dataset:Open-Orca/OpenOrca",
"license:apache-2.0",
"region:us"
] | 2024-01-09T15:34:30Z |
2024-01-16T16:18:36+00:00
| 0 | 0 |
---
datasets:
- Open-Orca/OpenOrca
language:
- am
library_name: fairseq
license: apache-2.0
metrics:
- accuracy
pipeline_tag: summarization
tags:
- climate
---
| null |
Non_BioNLP
|
{"datasets": ["Open-Orca/OpenOrca"], "language": ["am"], "library_name": "fairseq", "license": "apache-2.0", "metrics": ["accuracy"], "pipeline_tag": "summarization", "tags": ["climate"]}
|
task
|
[
"SUMMARIZATION"
] | 42,270 |
|
pgajo/whisper-medium.en-nick
|
pgajo
|
automatic-speech-recognition
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"arxiv:2212.04356",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | 2024-02-29T02:02:07Z |
2024-02-29T08:31:49+00:00
| 5 | 0 |
---
language:
- en
license: apache-2.0
pipeline_tag: automatic-speech-recognition
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: whisper-medium.en
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- type: wer
value: 4.120542365210176
name: Test WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- type: wer
value: 7.431640255663553
name: Test WER
---
# Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al. from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.
The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions to a *different* language to the audio.
Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
# Usage
This checkpoint is an *English-only* model, meaning it can be used for English speech recognition. Multilingual speech
recognition or speech translation is possible through use of a multilingual checkpoint.
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)
## Transcription
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium.en")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium.en")
>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|notimestamps|> Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
## Evaluation
This code snippet shows how to evaluate Whisper medium.en on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):
```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load
>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium.en")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium.en").to("cuda")
>>> def map_to_pred(batch):
>>> audio = batch["audio"]
>>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>> batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>> with torch.no_grad():
>>> predicted_ids = model.generate(input_features.to("cuda"))[0]
>>> transcription = processor.decode(predicted_ids)
>>> batch["prediction"] = processor.tokenizer._normalize(transcription)
>>> return batch
>>> result = librispeech_test_clean.map(map_to_pred)
>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
3.0154449620004904
```
## Long-Form Transcription
The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`:
```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>> "automatic-speech-recognition",
>>> model="openai/whisper-medium.en",
>>> chunk_length_s=30,
>>> device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
'timestamp': (0.0, 5.44)}]
```
Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech; using them for classification is not only unevaluated but also inappropriate, particularly for inferring human attributes.
## Training Data
The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.
As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
## Performance and Limitations
Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis of these limitations is provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and these hallucinations may be worse in lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
| null |
Non_BioNLP
|
# Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al. from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.
The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions to a *different* language to the audio.
Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
# Usage
This checkpoint is an *English-only* model, meaning it can be used for English speech recognition. Multilingual speech
recognition or speech translation is possible through use of a multilingual checkpoint.
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)
## Transcription
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium.en")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium.en")
>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|notimestamps|> Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
## Evaluation
This code snippet shows how to evaluate Whisper medium.en on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):
```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load
>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium.en")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium.en").to("cuda")
>>> def map_to_pred(batch):
>>> audio = batch["audio"]
>>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>> batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>> with torch.no_grad():
>>> predicted_ids = model.generate(input_features.to("cuda"))[0]
>>> transcription = processor.decode(predicted_ids)
>>> batch["prediction"] = processor.tokenizer._normalize(transcription)
>>> return batch
>>> result = librispeech_test_clean.map(map_to_pred)
>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
3.0154449620004904
```
## Long-Form Transcription
The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`:
```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>> "automatic-speech-recognition",
>>> model="openai/whisper-medium.en",
>>> chunk_length_s=30,
>>> device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
'timestamp': (0.0, 5.44)}]
```
Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech; using them for classification is not only unevaluated but also inappropriate, particularly for inferring human attributes.
## Training Data
The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.
As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
## Performance and Limitations
Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis of these limitations is provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and these hallucinations may be worse in lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
{"language": ["en"], "license": "apache-2.0", "pipeline_tag": "automatic-speech-recognition", "tags": ["audio", "automatic-speech-recognition", "hf-asr-leaderboard"], "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}], "model-index": [{"name": "whisper-medium.en", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (clean)", "type": "librispeech_asr", "config": "clean", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 4.120542365210176, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (other)", "type": "librispeech_asr", "config": "other", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 7.431640255663553, "name": "Test WER"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 42,271 |
TransferGraph/riyadhctg_distilbert-base-uncased-finetuned-cola-finetuned-lora-tweet_eval_emotion
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:riyadhctg/distilbert-base-uncased-finetuned-cola",
"base_model:adapter:riyadhctg/distilbert-base-uncased-finetuned-cola",
"license:apache-2.0",
"model-index",
"region:us"
] | 2024-02-29T12:47:36Z |
2024-02-29T12:47:42+00:00
| 0 | 0 |
---
base_model: riyadhctg/distilbert-base-uncased-finetuned-cola
datasets:
- tweet_eval
library_name: peft
license: apache-2.0
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: riyadhctg_distilbert-base-uncased-finetuned-cola-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.7299465240641712
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# riyadhctg_distilbert-base-uncased-finetuned-cola-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [riyadhctg/distilbert-base-uncased-finetuned-cola](https://huggingface.co/riyadhctg/distilbert-base-uncased-finetuned-cola) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2406 | None | 0 |
| 0.6123 | 1.1167 | 0 |
| 0.7139 | 0.7458 | 1 |
| 0.7299 | 0.6476 | 2 |
| 0.7299 | 0.6153 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# riyadhctg_distilbert-base-uncased-finetuned-cola-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [riyadhctg/distilbert-base-uncased-finetuned-cola](https://huggingface.co/riyadhctg/distilbert-base-uncased-finetuned-cola) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2406 | None | 0 |
| 0.6123 | 1.1167 | 0 |
| 0.7139 | 0.7458 | 1 |
| 0.7299 | 0.6476 | 2 |
| 0.7299 | 0.6153 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"base_model": "riyadhctg/distilbert-base-uncased-finetuned-cola", "datasets": ["tweet_eval"], "library_name": "peft", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "riyadhctg_distilbert-base-uncased-finetuned-cola-finetuned-lora-tweet_eval_emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "emotion", "split": "validation", "args": "emotion"}, "metrics": [{"type": "accuracy", "value": 0.7299465240641712, "name": "accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,272 |
GuenterBlaeser/distilbert-base-uncased-finetuned-emotion
|
GuenterBlaeser
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-10-15T13:09:43Z |
2023-10-15T14:57:58+00:00
| 93 | 0 |
---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- type: accuracy
value: 0.927
name: Accuracy
- type: f1
value: 0.9269163771502267
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2136
- Accuracy: 0.927
- F1: 0.9269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8415 | 1.0 | 250 | 0.3139 | 0.9075 | 0.9047 |
| 0.2506 | 2.0 | 500 | 0.2136 | 0.927 | 0.9269 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.14.1
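The card does not show how to run the model; a minimal sketch using the `transformers` pipeline is given below. The repository id is taken from this record and the example output is illustrative only.
```python
# Minimal sketch: emotion classification with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="GuenterBlaeser/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I am so happy you could make it to the party!"))
# e.g. [{'label': 'joy', 'score': 0.98}] -- actual labels follow the emotion dataset mapping
```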
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2136
- Accuracy: 0.927
- F1: 0.9269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8415 | 1.0 | 250 | 0.3139 | 0.9075 | 0.9047 |
| 0.2506 | 2.0 | 500 | 0.2136 | 0.927 | 0.9269 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.14.1
|
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.927, "name": "Accuracy"}, {"type": "f1", "value": 0.9269163771502267, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,273 |
henryk/bert-base-multilingual-cased-finetuned-dutch-squad2
|
henryk
|
question-answering
|
[
"transformers",
"pytorch",
"jax",
"bert",
"question-answering",
"nl",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-05-19T19:02:45+00:00
| 199 | 6 |
---
language: nl
---
# Multilingual + Dutch SQuAD2.0
This model is the multilingual model provided by the Google research team, fine-tuned on a Dutch Q&A downstream task.
## Details of the language model
Language model ([**bert-base-multilingual-cased**](https://github.com/google-research/bert/blob/master/multilingual.md)):
12-layer, 768-hidden, 12-heads, 110M parameters.
Trained on cased text in the top 104 languages with the largest Wikipedias.
## Details of the downstream task
Using the `mtranslate` Python module, [**SQuAD2.0**](https://rajpurkar.github.io/SQuAD-explorer/) was machine-translated. To find the start tokens, the direct translations of the answers were searched for in the corresponding paragraphs. Because an answer translated in isolation can differ from its translation within the paragraph (the isolated answer lacks context), the answer could not always be found in the text, so some question-answer examples were lost. As a result, the dataset may also contain noisy examples.
| Dataset | # Q&A |
| ---------------------- | ----- |
| SQuAD2.0 Train | 130 K |
| Dutch SQuAD2.0 Train | 99 K |
| SQuAD2.0 Dev | 12 K |
| Dutch SQuAD2.0 Dev | 10 K |
## Model benchmark
| Model | EM/F1 |HasAns (EM/F1) | NoAns |
| ---------------------- | ----- | ----- | ----- |
| [robBERT](https://huggingface.co/pdelobelle/robBERT-base) | 58.04/60.95 | 33.08/40.64 | 73.67 |
| [dutchBERT](https://huggingface.co/wietsedv/bert-base-dutch-cased) | 64.25/68.45 | 45.59/56.49 | 75.94 |
| [multiBERT](https://huggingface.co/bert-base-multilingual-cased) | **67.38**/**71.36** | 47.42/57.76 | 79.88 |
## Model training
The model was trained on a **Tesla V100** GPU with the following command:
```bash
export SQUAD_DIR=path/to/nl_squad
python run_squad.py \
--model_type bert \
--model_name_or_path bert-base-multilingual-cased \
--do_train \
--do_eval \
--train_file $SQUAD_DIR/nl_squadv2_train_clean.json \
--predict_file $SQUAD_DIR/nl_squadv2_dev_clean.json \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps=8000 \
--output_dir ../../output \
--overwrite_cache \
--overwrite_output_dir
```
**Results**:
{'exact': 67.38028751680629, 'f1': 71.362297054268, 'total': 9669, 'HasAns_exact': 47.422126745435015, 'HasAns_f1': 57.761023151910734, 'HasAns_total': 3724, 'NoAns_exact': 79.88225399495374, 'NoAns_f1': 79.88225399495374, 'NoAns_total': 5945, 'best_exact': 67.53542248422795, 'best_exact_thresh': 0.0, 'best_f1': 71.36229705426837, 'best_f1_thresh': 0.0}
## Model in action
Fast usage with **pipelines**:
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="henryk/bert-base-multilingual-cased-finetuned-dutch-squad2",
tokenizer="henryk/bert-base-multilingual-cased-finetuned-dutch-squad2"
)
qa_pipeline({
'context': "Amsterdam is de hoofdstad en de dichtstbevolkte stad van Nederland.",
'question': "Wat is de hoofdstad van Nederland?"})
```
# Output:
```json
{
"score": 0.83,
"start": 0,
"end": 9,
"answer": "Amsterdam"
}
```
## Contact
Please do not hesitate to contact me via [LinkedIn](https://www.linkedin.com/in/henryk-borzymowski-0755a2167/) if you want to discuss or get access to the Dutch version of SQuAD.
| null |
Non_BioNLP
|
# Multilingual + Dutch SQuAD2.0
This model is the multilingual model provided by the Google research team, fine-tuned on a Dutch Q&A downstream task.
## Details of the language model
Language model ([**bert-base-multilingual-cased**](https://github.com/google-research/bert/blob/master/multilingual.md)):
12-layer, 768-hidden, 12-heads, 110M parameters.
Trained on cased text in the top 104 languages with the largest Wikipedias.
## Details of the downstream task
Using the `mtranslate` Python module, [**SQuAD2.0**](https://rajpurkar.github.io/SQuAD-explorer/) was machine-translated. To find the start tokens, the direct translations of the answers were searched for in the corresponding paragraphs. Because an answer translated in isolation can differ from its translation within the paragraph (the isolated answer lacks context), the answer could not always be found in the text, so some question-answer examples were lost. As a result, the dataset may also contain noisy examples.
| Dataset | # Q&A |
| ---------------------- | ----- |
| SQuAD2.0 Train | 130 K |
| Dutch SQuAD2.0 Train | 99 K |
| SQuAD2.0 Dev | 12 K |
| Dutch SQuAD2.0 Dev | 10 K |
## Model benchmark
| Model | EM/F1 |HasAns (EM/F1) | NoAns |
| ---------------------- | ----- | ----- | ----- |
| [robBERT](https://huggingface.co/pdelobelle/robBERT-base) | 58.04/60.95 | 33.08/40.64 | 73.67 |
| [dutchBERT](https://huggingface.co/wietsedv/bert-base-dutch-cased) | 64.25/68.45 | 45.59/56.49 | 75.94 |
| [multiBERT](https://huggingface.co/bert-base-multilingual-cased) | **67.38**/**71.36** | 47.42/57.76 | 79.88 |
## Model training
The model was trained on a **Tesla V100** GPU with the following command:
```bash
export SQUAD_DIR=path/to/nl_squad
python run_squad.py \
--model_type bert \
--model_name_or_path bert-base-multilingual-cased \
--do_train \
--do_eval \
--train_file $SQUAD_DIR/nl_squadv2_train_clean.json \
--predict_file $SQUAD_DIR/nl_squadv2_dev_clean.json \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps=8000 \
--output_dir ../../output \
--overwrite_cache \
--overwrite_output_dir
```
**Results**:
{'exact': 67.38028751680629, 'f1': 71.362297054268, 'total': 9669, 'HasAns_exact': 47.422126745435015, 'HasAns_f1': 57.761023151910734, 'HasAns_total': 3724, 'NoAns_exact': 79.88225399495374, 'NoAns_f1': 79.88225399495374, 'NoAns_total': 5945, 'best_exact': 67.53542248422795, 'best_exact_thresh': 0.0, 'best_f1': 71.36229705426837, 'best_f1_thresh': 0.0}
## Model in action
Fast usage with **pipelines**:
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="henryk/bert-base-multilingual-cased-finetuned-dutch-squad2",
tokenizer="henryk/bert-base-multilingual-cased-finetuned-dutch-squad2"
)
qa_pipeline({
'context': "Amsterdam is de hoofdstad en de dichtstbevolkte stad van Nederland.",
'question': "Wat is de hoofdstad van Nederland?"})
```
# Output:
```json
{
"score": 0.83,
"start": 0,
"end": 9,
"answer": "Amsterdam"
}
```
## Contact
Please do not hesitate to contact me via [LinkedIn](https://www.linkedin.com/in/henryk-borzymowski-0755a2167/) if you want to discuss or get access to the Dutch version of SQuAD.
|
{"language": "nl"}
|
task
|
[
"TRANSLATION"
] | 42,274 |
LinguaCustodia/fin-pythia-1.4b
|
LinguaCustodia
|
text-generation
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"sentiment analysis",
"financial sentiment analysis",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-10-23T13:03:56Z |
2023-10-26T11:44:28+00:00
| 25 | 4 |
---
license: apache-2.0
tags:
- sentiment analysis
- financial sentiment analysis
---
Fin-Pythia-1.4B is an instruction-finetuned model for sentiment analysis of financial text. It is built by 1) further training the Pythia-1.4B model on financial documents, then 2) instruction fine-tuning it on financial tasks. Although the model is designed for sentiment analysis, it also performs well on other tasks such as named entity recognition (see our FinNLP 2023 paper). Fin-Pythia-1.4B's performance on financial sentiment analysis is on par with much larger financial LLMs and exceeds that of general models like GPT-4:
| Models | FPB | FIQA-SA | Headlines | NER |
|--------------|--------|---------|-----------|--------|
| BloombergGPT | 0.51 | 0.75 | 0.82 | 0.61 |
| GPT-4 | 0.78 | - | 0.86 | 0.83 |
| FinMA-7B | 0.86 | 0.84 | 0.98 | 0.75 |
| FinMA-30B | 0.88 | 0.87 | 0.97 | 0.62 |
| Pythia-1.4B | 0.84 | 0.83 | 0.97 | 0.69 |
---
## Usage
Your instruction should follow this format:
```
prompt = "\n".join([
'### Instruction: YOUR_INSTRUCTION',
'### Text: YOUR_SENTENCE',
'### Answer:'])
```
For example:
`### Instruction: Analyze the sentiment of this statement extracted from a financial news article. Provide your answer as either negative, positive, or neutral.\n### Text: The economic uncertainty caused by the ongoing trade tensions between major global economies has led to a sharp decline in investor confidence, resulting in a significant drop in the stock market.\n### Answer:`
You could also force the model to generate only the sentiment tokens using the following example code:
```python
prompt = "### Instruction: Analyze the sentiment of this statement extracted from a financial news article. Provide your answer as either negative, positive, or neutral.\n### Text: XYZ reported record-breaking profits for the quarter, exceeding analyst expectations and driving their stock price to new highs.\n### Answer:"
target_classes = ["positive", "negative", "neutral"]
target_class_ids = tokenizer.convert_tokens_to_ids(target_classes)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(args.device)
outputs = model(inputs.input_ids)
top_output = outputs.logits[0][-1][target_class_ids].argmax(dim=0)
print(target_classes[top_output])
```
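The snippets above assume that `model`, `tokenizer`, and `args.device` are already defined; a minimal loading sketch is shown below. The dtype and device handling are assumptions, only the model id comes from this repository.
```python
# Minimal loading sketch for the snippets above (device/dtype choices are assumptions).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LinguaCustodia/fin-pythia-1.4b"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)
model.eval()

# Build `inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(device)`
# and proceed exactly as in the forced-decoding example above.
```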
## Citation
```
@misc{lc_finnlp2023,
title={Large Language Model Adaptation for Financial Sentiment Analysis},
author={Rodriguez Inserte Pau and Nakhlé Mariam and Qader Raheel and Caillaut Gaëtan and Liu Jingshu},
year={2023},
}
```
---
## About Lingua Custodia
Lingua Custodia is a Paris-based fintech company and a leader in Natural Language Processing (NLP) for finance. It was created in 2011 by finance professionals, initially to offer specialized machine translation.
Leveraging its state-of-the-art NLP expertise, the company now offers a growing range of applications in addition to its initial machine translation offering: Speech-to-Text automation, document classification, linguistic data extraction from unstructured documents, and mass web crawling and data collection, achieving superior quality thanks to highly domain-focused machine learning algorithms.
## Contact information
contact[at]linguacustodia[dot]com
| null |
Non_BioNLP
|
Fin-Pythia-1.4B is an instruction-finetuned model for sentiment analysis of financial text. It is built by 1) further training the Pythia-1.4B model on financial documents, then 2) instruction fine-tuning it on financial tasks. Although the model is designed for sentiment analysis, it also performs well on other tasks such as named entity recognition (see our FinNLP 2023 paper). Fin-Pythia-1.4B's performance on financial sentiment analysis is on par with much larger financial LLMs and exceeds that of general models like GPT-4:
| Models | FPB | FIQA-SA | Headlines | NER |
|--------------|--------|---------|-----------|--------|
| BloombergGPT | 0.51 | 0.75 | 0.82 | 0.61 |
| GPT-4 | 0.78 | - | 0.86 | 0.83 |
| FinMA-7B | 0.86 | 0.84 | 0.98 | 0.75 |
| FinMA-30B | 0.88 | 0.87 | 0.97 | 0.62 |
| Pythia-1.4B | 0.84 | 0.83 | 0.97 | 0.69 |
---
## Usage
Your instruction should follow this format:
```
prompt = "\n".join([
'### Instruction: YOUR_INSTRUCTION',
'### Text: YOUR_SENTENCE',
'### Answer:'])
```
For example:
`### Instruction: Analyze the sentiment of this statement extracted from a financial news article. Provide your answer as either negative, positive, or neutral.\n### Text: The economic uncertainty caused by the ongoing trade tensions between major global economies has led to a sharp decline in investor confidence, resulting in a significant drop in the stock market.\n### Answer:`
You could also force the model to generate only the sentiment tokens using the following example code:
```python
prompt = "### Instruction: Analyze the sentiment of this statement extracted from a financial news article. Provide your answer as either negative, positive, or neutral.\n### Text: XYZ reported record-breaking profits for the quarter, exceeding analyst expectations and driving their stock price to new highs.\n### Answer:"
target_classes = ["positive", "negative", "neutral"]
target_class_ids = tokenizer.convert_tokens_to_ids(target_classes)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(args.device)
outputs = model(inputs.input_ids)
top_output = outputs.logits[0][-1][target_class_ids].argmax(dim=0)
print(target_classes[top_output])
```
## Citation
```
@misc{lc_finnlp2023,
title={Large Language Model Adaptation for Financial Sentiment Analysis},
author={Rodriguez Inserte Pau and Nakhlé Mariam and Qader Raheel and Caillaut Gaëtan and Liu Jingshu},
year={2023},
}
```
---
## About Lingua Custodia
Lingua Custodia is a Paris-based fintech company and a leader in Natural Language Processing (NLP) for finance. It was created in 2011 by finance professionals, initially to offer specialized machine translation.
Leveraging its state-of-the-art NLP expertise, the company now offers a growing range of applications in addition to its initial machine translation offering: Speech-to-Text automation, document classification, linguistic data extraction from unstructured documents, and mass web crawling and data collection, achieving superior quality thanks to highly domain-focused machine learning algorithms.
## Contact information
contact[at]linguacustodia[dot]com
|
{"license": "apache-2.0", "tags": ["sentiment analysis", "financial sentiment analysis"]}
|
task
|
[
"NAMED_ENTITY_RECOGNITION",
"TRANSLATION"
] | 42,275 |
deepset/xlm-roberta-large-squad2
|
deepset
|
question-answering
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"question-answering",
"multilingual",
"dataset:squad_v2",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2024-09-26T10:53:55+00:00
| 6,269 | 49 |
---
datasets:
- squad_v2
language: multilingual
license: cc-by-4.0
tags:
- question-answering
model-index:
- name: deepset/xlm-roberta-large-squad2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- type: exact_match
value: 81.8281
name: Exact Match
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzVhZDE2NTg5NmUwOWRkMmI2MGUxYjFlZjIzNmMyNDQ2MDY2MDNhYzE0ZjY5YTkyY2U4ODc3ODFiZjQxZWQ2YSIsInZlcnNpb24iOjF9.f_rN3WPMAdv-OBPz0T7N7lOxYz9f1nEr_P-vwKhi3jNdRKp_JTy18MYR9eyJM2riKHC6_ge-8XwfyrUf51DSDA
- type: f1
value: 84.8886
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGE5MWJmZGUxMGMwNWFhYzVhZjQwZGEwOWQ4N2Q2Yjg5NzdjNDFiNDhiYTQ1Y2E5ZWJkOTFhYmI1Y2Q2ZGYwOCIsInZlcnNpb24iOjF9.TIdH-tOx3kEMDs5wK1r6iwZqqSjNGlBrpawrsE917j1F3UFJVnQ7wJwaj0OIgmC4iw8OQeLZL56ucBcLApa-AQ
---
# Multilingual XLM-RoBERTa large for Extractive QA on various languages
## Overview
**Language model:** xlm-roberta-large
**Language:** Multilingual
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD dev set - German MLQA - German XQuAD
**Training run:** [MLFlow link](https://public-mlflow.deepset.ai/#/experiments/124/runs/3a540e3f3ecf4dd98eae8fc6d457ff20)
**Code:** See [an example extractive QA pipeline built with Haystack](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline)
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "xlm-roberta-large"
max_seq_len = 256
learning_rate = 1e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```
## Usage
### In Haystack
Haystack is an AI orchestration framework to build customizable, production-ready LLM applications. You can use this model in Haystack to do extractive question answering on documents.
To load and run the model with [Haystack](https://github.com/deepset-ai/haystack/):
```python
# After running pip install haystack-ai "transformers[torch,sentencepiece]"
from haystack import Document
from haystack.components.readers import ExtractiveReader
docs = [
Document(content="Python is a popular programming language"),
Document(content="python ist eine beliebte Programmiersprache"),
]
reader = ExtractiveReader(model="deepset/xlm-roberta-large-squad2")
reader.warm_up()
question = "What is a popular programming language?"
result = reader.run(query=question, documents=docs)
# {'answers': [ExtractedAnswer(query='What is a popular programming language?', score=0.5740374326705933, data='python', document=Document(id=..., content: '...'), context=None, document_offset=ExtractedAnswer.Span(start=0, end=6),...)]}
```
For a complete example with an extractive question answering pipeline that scales over many documents, check out the [corresponding Haystack tutorial](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline).
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/xlm-roberta-large-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Performance
Evaluated on the SQuAD 2.0 English dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 79.45759285774446,
"f1": 83.79259828925511,
"total": 11873,
"HasAns_exact": 71.96356275303644,
"HasAns_f1": 80.6460053117963,
"HasAns_total": 5928,
"NoAns_exact": 86.93019343986543,
"NoAns_f1": 86.93019343986543,
"NoAns_total": 5945
```
Evaluated on German [MLQA: test-context-de-question-de.json](https://github.com/facebookresearch/MLQA)
```
"exact": 49.34691166703564,
"f1": 66.15582561674236,
"total": 4517,
```
Evaluated on German [XQuAD: xquad.de.json](https://github.com/deepmind/xquad)
```
"exact": 61.51260504201681,
"f1": 78.80206098332569,
"total": 1190,
```
## Usage
### In Haystack
For doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in [haystack](https://github.com/deepset-ai/haystack/):
```python
from haystack.nodes import FARMReader, TransformersReader  # Haystack 1.x API

reader = FARMReader(model_name_or_path="deepset/xlm-roberta-large-squad2")
# or
reader = TransformersReader(model="deepset/xlm-roberta-large-squad2", tokenizer="deepset/xlm-roberta-large-squad2")
```
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/xlm-roberta-large-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
**Branden Chan:** [email protected]
**Timo Möller:** [email protected]
**Malte Pietsch:** [email protected]
**Tanay Soni:** [email protected]
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the production-ready open-source AI framework [Haystack](https://haystack.deepset.ai/).
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT](https://deepset.ai/german-bert), [GermanQuAD and GermanDPR](https://deepset.ai/germanquad), [German embedding model](https://huggingface.co/mixedbread-ai/deepset-mxbai-embed-de-large-v1)
- [deepset Cloud](https://www.deepset.ai/deepset-cloud-product), [deepset Studio](https://www.deepset.ai/deepset-studio)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/Haystack_AI) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://haystack.deepset.ai/) | [YouTube](https://www.youtube.com/@deepset_ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
| null |
Non_BioNLP
|
# Multilingual XLM-RoBERTa large for Extractive QA on various languages
## Overview
**Language model:** xlm-roberta-large
**Language:** Multilingual
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD dev set - German MLQA - German XQuAD
**Training run:** [MLFlow link](https://public-mlflow.deepset.ai/#/experiments/124/runs/3a540e3f3ecf4dd98eae8fc6d457ff20)
**Code:** See [an example extractive QA pipeline built with Haystack](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline)
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "xlm-roberta-large"
max_seq_len = 256
learning_rate = 1e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```
## Usage
### In Haystack
Haystack is an AI orchestration framework to build customizable, production-ready LLM applications. You can use this model in Haystack to do extractive question answering on documents.
To load and run the model with [Haystack](https://github.com/deepset-ai/haystack/):
```python
# After running pip install haystack-ai "transformers[torch,sentencepiece]"
from haystack import Document
from haystack.components.readers import ExtractiveReader
docs = [
Document(content="Python is a popular programming language"),
Document(content="python ist eine beliebte Programmiersprache"),
]
reader = ExtractiveReader(model="deepset/xlm-roberta-large-squad2")
reader.warm_up()
question = "What is a popular programming language?"
result = reader.run(query=question, documents=docs)
# {'answers': [ExtractedAnswer(query='What is a popular programming language?', score=0.5740374326705933, data='python', document=Document(id=..., content: '...'), context=None, document_offset=ExtractedAnswer.Span(start=0, end=6),...)]}
```
For a complete example with an extractive question answering pipeline that scales over many documents, check out the [corresponding Haystack tutorial](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline).
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/xlm-roberta-large-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Performance
Evaluated on the SQuAD 2.0 English dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 79.45759285774446,
"f1": 83.79259828925511,
"total": 11873,
"HasAns_exact": 71.96356275303644,
"HasAns_f1": 80.6460053117963,
"HasAns_total": 5928,
"NoAns_exact": 86.93019343986543,
"NoAns_f1": 86.93019343986543,
"NoAns_total": 5945
```
Evaluated on German [MLQA: test-context-de-question-de.json](https://github.com/facebookresearch/MLQA)
```
"exact": 49.34691166703564,
"f1": 66.15582561674236,
"total": 4517,
```
Evaluated on German [XQuAD: xquad.de.json](https://github.com/deepmind/xquad)
```
"exact": 61.51260504201681,
"f1": 78.80206098332569,
"total": 1190,
```
## Usage
### In Haystack
For doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in [haystack](https://github.com/deepset-ai/haystack/):
```python
from haystack.nodes import FARMReader, TransformersReader  # Haystack 1.x API

reader = FARMReader(model_name_or_path="deepset/xlm-roberta-large-squad2")
# or
reader = TransformersReader(model="deepset/xlm-roberta-large-squad2", tokenizer="deepset/xlm-roberta-large-squad2")
```
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/xlm-roberta-large-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
**Branden Chan:** [email protected]
**Timo Möller:** [email protected]
**Malte Pietsch:** [email protected]
**Tanay Soni:** [email protected]
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the production-ready open-source AI framework [Haystack](https://haystack.deepset.ai/).
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT](https://deepset.ai/german-bert), [GermanQuAD and GermanDPR](https://deepset.ai/germanquad), [German embedding model](https://huggingface.co/mixedbread-ai/deepset-mxbai-embed-de-large-v1)
- [deepset Cloud](https://www.deepset.ai/deepset-cloud-product), [deepset Studio](https://www.deepset.ai/deepset-studio)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/Haystack_AI) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://haystack.deepset.ai/) | [YouTube](https://www.youtube.com/@deepset_ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"datasets": ["squad_v2"], "language": "multilingual", "license": "cc-by-4.0", "tags": ["question-answering"], "model-index": [{"name": "deepset/xlm-roberta-large-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 81.8281, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzVhZDE2NTg5NmUwOWRkMmI2MGUxYjFlZjIzNmMyNDQ2MDY2MDNhYzE0ZjY5YTkyY2U4ODc3ODFiZjQxZWQ2YSIsInZlcnNpb24iOjF9.f_rN3WPMAdv-OBPz0T7N7lOxYz9f1nEr_P-vwKhi3jNdRKp_JTy18MYR9eyJM2riKHC6_ge-8XwfyrUf51DSDA"}, {"type": "f1", "value": 84.8886, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGE5MWJmZGUxMGMwNWFhYzVhZjQwZGEwOWQ4N2Q2Yjg5NzdjNDFiNDhiYTQ1Y2E5ZWJkOTFhYmI1Y2Q2ZGYwOCIsInZlcnNpb24iOjF9.TIdH-tOx3kEMDs5wK1r6iwZqqSjNGlBrpawrsE917j1F3UFJVnQ7wJwaj0OIgmC4iw8OQeLZL56ucBcLApa-AQ"}]}]}]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,276 |
FINGU-AI/L3-78b-Large-v1
|
FINGU-AI
| null |
[
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | 2025-01-20T07:06:16Z |
2025-02-02T12:57:39+00:00
| 5 | 1 |
---
license: apache-2.0
---
# FINGU-AI/L3-78b-Large-v1
## Overview
`FINGU-AI/L3-78b-Large-v1` is a powerful causal language model designed for a variety of natural language processing (NLP) tasks, including machine translation, text generation, and chat-based applications. This model is particularly useful for translating between languages, as well as supporting other custom NLP tasks through flexible input.
## Example Usage
### Installation
Make sure to install the required packages:
```bash
pip install torch transformers
```
### Loading the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Model and Tokenizer
model_id = 'FINGU-AI/L3-78b-Large-v1'
model = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation="sdpa", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.to('cuda')
# Input Messages for Translation
messages = [
{"role": "system", "content": "translate korean to Uzbek"},
{"role": "user", "content": """새로운 은행 계좌를 개설하는 절차는 다음과 같습니다:
1. 계좌 개설 목적과 신분 확인을 위한 서류 제출
2. 서류 검토 과정을 거치는 것
3. 고객님의 신원 확인 절차를 진행하는 것
4. 모든 절차가 완료되면 계좌 개설이 가능합니다.
계좌 개설을 원하시는 경우, 신분증과 함께 방문해 주시면 됩니다.
"""},
]
# Tokenize and Generate Response
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to('cuda')
outputs = model.generate(
input_ids,
max_new_tokens=500,
do_sample=True,
)
# Decode and Print the Translation
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
| null |
Non_BioNLP
|
# FINGU-AI/L3-78b-Large-v1
## Overview
`FINGU-AI/L3-78b-Large-v1` is a powerful causal language model designed for a variety of natural language processing (NLP) tasks, including machine translation, text generation, and chat-based applications. This model is particularly useful for translating between languages, as well as supporting other custom NLP tasks through flexible input.
## Example Usage
### Installation
Make sure to install the required packages:
```bash
pip install torch transformers
```
### Loading the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Model and Tokenizer
model_id = 'FINGU-AI/L3-78b-Large-v1'
model = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation="sdpa", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.to('cuda')
# Input Messages for Translation
messages = [
{"role": "system", "content": "translate korean to Uzbek"},
{"role": "user", "content": """새로운 은행 계좌를 개설하는 절차는 다음과 같습니다:
1. 계좌 개설 목적과 신분 확인을 위한 서류 제출
2. 서류 검토 과정을 거치는 것
3. 고객님의 신원 확인 절차를 진행하는 것
4. 모든 절차가 완료되면 계좌 개설이 가능합니다.
계좌 개설을 원하시는 경우, 신분증과 함께 방문해 주시면 됩니다.
"""},
]
# Tokenize and Generate Response
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to('cuda')
outputs = model.generate(
input_ids,
max_new_tokens=500,
do_sample=True,
)
# Decode and Print the Translation
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
|
{"license": "apache-2.0"}
|
task
|
[
"TRANSLATION"
] | 42,277 |
sooh098/mt5-small-finetuned-amazon-en-es
|
sooh098
|
summarization
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-10-02T15:09:20Z |
2023-10-02T15:52:25+00:00
| 20 | 0 |
---
base_model: google/mt5-small
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 0.0 | 1.0 | 1209 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 2.0 | 2418 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 3.0 | 3627 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 4.0 | 4836 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 5.0 | 6045 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 6.0 | 7254 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 7.0 | 8463 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 8.0 | 9672 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
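No usage example is given, and since the reported loss is `nan` with zero ROUGE the checkpoint may not produce useful summaries; still, a minimal invocation sketch is shown below (repository id assumed from this record).
```python
# Minimal sketch: run the fine-tuned mT5 checkpoint through the summarization pipeline.
from transformers import pipeline

summarizer = pipeline("summarization", model="sooh098/mt5-small-finetuned-amazon-en-es")

review = (
    "I bought this coffee maker last month and it has worked flawlessly every morning. "
    "It brews quickly, is easy to clean, and the carafe keeps the coffee hot for hours."
)
print(summarizer(review, max_length=30, min_length=5)[0]["summary_text"])
```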
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 0.0 | 1.0 | 1209 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 2.0 | 2418 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 3.0 | 3627 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 4.0 | 4836 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 5.0 | 6045 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 6.0 | 7254 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 7.0 | 8463 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 8.0 | 9672 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
{"base_model": "google/mt5-small", "license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "mt5-small-finetuned-amazon-en-es", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 42,278 |
sharpbai/llama-7b-hf
|
sharpbai
|
text-generation
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-06-14T16:38:42Z |
2023-06-28T16:13:52+00:00
| 51 | 0 |
---
license: other
---
# llama-7b-hf
*The weight file is split into chunks with a size of 405MB for convenient and fast parallel downloads*
A 405MB split weight version of [yahma/llama-7b-hf](https://huggingface.co/yahma/llama-7b-hf)
The original model card is reproduced below
-----------------------------------------
LLaMA-7B converted to work with git head Transformers/HuggingFace on April 8, 2023. This version should resolve the EOS token issues.
This is under a special license, please see the LICENSE file for details.
This contains the weights for the LLaMA-7b model. This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but either lost your copy of the weights or ran into trouble converting them to the Transformers format.
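A minimal sketch of loading this split-weight copy with `transformers` is shown below; the repository id, the `float16` dtype, and `device_map="auto"` (which requires `accelerate`) are assumptions rather than part of the original card.
```python
# Minimal sketch: load the 405MB-sharded LLaMA-7B weights and generate a few tokens.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model_id = "sharpbai/llama-7b-hf"
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires accelerate; an assumption, not from the card
)

prompt = "The LLaMA language models were trained on"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```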
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December. 2022 and Feb. 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
exploring potential applications such as question answering, natural language understanding or reading comprehension,
understanding capabilities and limitations of current language models, and developing techniques to improve those,
evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measure to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >LLaMA</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5.E-04</th><th>4M</th><th>1.4T
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5.E-04</th><th>4M</th><th>1.4T
</tr>
</tbody>
</table>
*Table 1 - Summary of LLama Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
</th>
<tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
</th>
<tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
</th>
<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLama Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that a lower value indicates lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary bias of our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
| null |
Non_BioNLP
|
# llama-7b-hf
*The weight file is split into chunks with a size of 405MB for convenient and fast parallel downloads*
A 405MB split weight version of [yahma/llama-7b-hf](https://huggingface.co/yahma/llama-7b-hf)
The original model card is reproduced below
-----------------------------------------
LLaMA-7B converted to work with git head Transformers/HuggingFace on April 8, 2023. This version should resolve the EOS token issues.
This is under a special license, please see the LICENSE file for details.
This contains the weights for the LLaMA-7b model. This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but either lost your copy of the weights or ran into trouble converting them to the Transformers format.
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December. 2022 and Feb. 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
exploring potential applications such as question answering, natural language understanding or reading comprehension,
understanding capabilities and limitations of current language models, and developing techniques to improve those,
evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measure to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >LLaMA</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5.E-04</th><th>4M</th><th>1.4T
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5.E-04</th><th>4M</th><th>1.4T
</tr>
</tbody>
</table>
*Table 1 - Summary of LLama Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
</th>
<tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
</th>
<tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
</th>
<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLama Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that a lower value indicates lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary bias of our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
---
license: mit
license_link: https://huggingface.co/microsoft/Florence-2-large/resolve/main/LICENSE
pipeline_tag: image-text-to-text
tags:
- vision
---
# Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks
## Model Summary
**This is a continued-pretrained version of the Florence-2-large model with a 4k context length; only 0.1B samples were used for continued pretraining, so it may not be fully trained. In addition, the OCR task has been updated with a line separator ('\n'). COCO OD AP: 39.8.**
This Hub repository contains a Hugging Face `transformers` implementation of the Florence-2 model from Microsoft.
Florence-2 is an advanced vision foundation model that uses a prompt-based approach to handle a wide range of vision and vision-language tasks. Florence-2 can interpret simple text prompts to perform tasks like captioning, object detection, and segmentation. It leverages our FLD-5B dataset, containing 5.4 billion annotations across 126 million images, to master multi-task learning. The model's sequence-to-sequence architecture enables it to excel in both zero-shot and fine-tuned settings, proving to be a competitive vision foundation model.
Resources and Technical Documentation:
+ [Florence-2 technical report](https://arxiv.org/abs/2311.06242).
+ [Jupyter Notebook for inference and visualization of Florence-2-large](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb)
| Model | Model size | Model Description |
| ------- | ------------- | ------------- |
| Florence-2-base[[HF]](https://huggingface.co/microsoft/Florence-2-base) | 0.23B | Pretrained model with FLD-5B |
| Florence-2-large[[HF]](https://huggingface.co/microsoft/Florence-2-large) | 0.77B | Pretrained model with FLD-5B |
| Florence-2-base-ft[[HF]](https://huggingface.co/microsoft/Florence-2-base-ft) | 0.23B | Finetuned model on a collection of downstream tasks |
| Florence-2-large-ft[[HF]](https://huggingface.co/microsoft/Florence-2-large-ft) | 0.77B | Finetuned model on a collection of downstream tasks |
## How to Get Started with the Model
Use the code below to get started with the model. All models are trained with float16.
```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-large", torch_dtype=torch_dtype, trust_remote_code=True).to(device)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)
prompt = "<OD>"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=prompt, images=image, return_tensors="pt").to(device, torch_dtype)
generated_ids = model.generate(
input_ids=inputs["input_ids"],
pixel_values=inputs["pixel_values"],
max_new_tokens=4096,
num_beams=3,
do_sample=False
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed_answer = processor.post_process_generation(generated_text, task="<OD>", image_size=(image.width, image.height))
print(parsed_answer)
```
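For the `<OD>` prompt, `parsed_answer` is a dictionary of bounding boxes and labels (the exact format is documented under Object Detection below). A minimal sketch of visualizing that output, continuing from the snippet above (the drawing style and output file name are illustrative):
```python
# Continues from the snippet above; assumes parsed_answer follows the
# {'<OD>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': [...]}} format.
from PIL import ImageDraw

detections = parsed_answer["<OD>"]
annotated = image.copy()
draw = ImageDraw.Draw(annotated)
for (x1, y1, x2, y2), label in zip(detections["bboxes"], detections["labels"]):
    draw.rectangle([x1, y1, x2, y2], outline="red", width=3)
    draw.text((x1, y1), label, fill="red")
annotated.save("detections.png")
```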
## Tasks
This model can perform different tasks by changing the prompt.
First, let's define a function to run a prompt.
<details>
<summary> Click to expand </summary>
```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-large", torch_dtype=torch_dtype, trust_remote_code=True).to(device)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
def run_example(task_prompt, text_input=None):
    if text_input is None:
        prompt = task_prompt
    else:
        prompt = task_prompt + text_input
    inputs = processor(text=prompt, images=image, return_tensors="pt").to(device, torch_dtype)
    generated_ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=1024,
        num_beams=3
    )
    generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
    parsed_answer = processor.post_process_generation(generated_text, task=task_prompt, image_size=(image.width, image.height))
    print(parsed_answer)
```
</details>
Here are the tasks `Florence-2` can perform:
<details>
<summary> Click to expand </summary>
### Caption
```python
prompt = "<CAPTION>"
run_example(prompt)
```
### Detailed Caption
```python
prompt = "<DETAILED_CAPTION>"
run_example(prompt)
```
### More Detailed Caption
```python
prompt = "<MORE_DETAILED_CAPTION>"
run_example(prompt)
```
### Caption to Phrase Grounding
The caption to phrase grounding task requires an additional text input, i.e. the caption.
Caption to phrase grounding results format:
{'\<CAPTION_TO_PHRASE_GROUNDING>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['', '', ...]}}
```python
task_prompt = "<CAPTION_TO_PHRASE_GROUNDING>"
results = run_example(task_prompt, text_input="A green car parked in front of a yellow building.")
```
### Object Detection
OD results format:
{'\<OD>': {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['label1', 'label2', ...]} }
```python
prompt = "<OD>"
run_example(prompt)
```
### Dense Region Caption
Dense region caption results format:
{'\<DENSE_REGION_CAPTION>' : {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['label1', 'label2', ...]} }
```python
prompt = "<DENSE_REGION_CAPTION>"
run_example(prompt)
```
### Region proposal
Region proposal results format:
{'\<REGION_PROPOSAL>': {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['', '', ...]}}
```python
prompt = "<REGION_PROPOSAL>"
run_example(prompt)
```
### OCR
```python
prompt = "<OCR>"
run_example(prompt)
```
### OCR with Region
OCR with region output format:
{'\<OCR_WITH_REGION>': {'quad_boxes': [[x1, y1, x2, y2, x3, y3, x4, y4], ...], 'labels': ['text1', ...]}}
```python
prompt = "<OCR_WITH_REGION>"
run_example(prompt)
```
### Output confidence score with Object Detection
```python
def run_example_with_score(task_prompt, text_input=None):
    if text_input is None:
        prompt = task_prompt
    else:
        prompt = task_prompt + text_input
    inputs = processor(text=prompt, images=image, return_tensors="pt").to(device, torch_dtype)
    generated_ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=1024,
        num_beams=3,
        return_dict_in_generate=True,
        output_scores=True,
    )
    generated_text = processor.batch_decode(generated_ids.sequences, skip_special_tokens=False)[0]
    prediction, scores, beam_indices = generated_ids.sequences, generated_ids.scores, generated_ids.beam_indices
    transition_beam_scores = model.compute_transition_scores(
        sequences=prediction,
        scores=scores,
        beam_indices=beam_indices,
    )
    parsed_answer = processor.post_process_generation(
        sequence=generated_ids.sequences[0],
        transition_beam_score=transition_beam_scores[0],
        task=task_prompt, image_size=(image.width, image.height)
    )
    print(parsed_answer)

prompt = "<OD>"
run_example_with_score(prompt)
```
For more detailed examples, please refer to the [notebook](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb)
</details>
# Benchmarks
## Florence-2 Zero-shot performance
The following table presents the zero-shot performance of generalist vision foundation models on image captioning and object detection evaluation tasks. These models have not been exposed to the training data of the evaluation tasks during their training phase.
| Method | #params | COCO Cap. test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | COCO Det. val2017 mAP |
|--------|---------|----------------------|------------------|--------------------|-----------------------|
| Flamingo | 80B | 84.3 | - | - | - |
| Florence-2-base| 0.23B | 133.0 | 118.7 | 70.1 | 34.7 |
| Florence-2-large| 0.77B | 135.6 | 120.8 | 72.8 | 37.5 |
The following table continues the comparison with performance on other vision-language evaluation tasks.
| Method | Flickr30k test R@1 | Refcoco val Accuracy | Refcoco test-A Accuracy | Refcoco test-B Accuracy | Refcoco+ val Accuracy | Refcoco+ test-A Accuracy | Refcoco+ test-B Accuracy | Refcocog val Accuracy | Refcocog test Accuracy | Refcoco RES val mIoU |
|--------|----------------------|----------------------|-------------------------|-------------------------|-----------------------|--------------------------|--------------------------|-----------------------|------------------------|----------------------|
| Kosmos-2 | 78.7 | 52.3 | 57.4 | 47.3 | 45.5 | 50.7 | 42.2 | 60.6 | 61.7 | - |
| Florence-2-base | 83.6 | 53.9 | 58.4 | 49.7 | 51.5 | 56.4 | 47.9 | 66.3 | 65.1 | 34.6 |
| Florence-2-large | 84.4 | 56.3 | 61.6 | 51.4 | 53.6 | 57.9 | 49.9 | 68.0 | 67.0 | 35.8 |
## Florence-2 finetuned performance
We finetune Florence-2 models on a collection of downstream tasks, resulting in two generalist models, *Florence-2-base-ft* and *Florence-2-large-ft*, that can conduct a wide range of downstream tasks.
The table below compares the performance of specialist and generalist models on various captioning and Visual Question Answering (VQA) tasks. Specialist models are fine-tuned specifically for each task, whereas generalist models are fine-tuned in a task-agnostic manner across all tasks. The symbol "▲" indicates the usage of external OCR as input.
| Method | # Params | COCO Caption Karpathy test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | VQAv2 test-dev Acc | TextVQA test-dev Acc | VizWiz VQA test-dev Acc |
|----------------|----------|-----------------------------------|------------------|--------------------|--------------------|----------------------|-------------------------|
| **Specialist Models** | | | | | | | |
| CoCa | 2.1B | 143.6 | 122.4 | - | 82.3 | - | - |
| BLIP-2 | 7.8B | 144.5 | 121.6 | - | 82.2 | - | - |
| GIT2 | 5.1B | 145.0 | 126.9 | 148.6 | 81.7 | 67.3 | 71.0 |
| Flamingo | 80B | 138.1 | - | - | 82.0 | 54.1 | 65.7 |
| PaLI | 17B | 149.1 | 127.0 | 160.0▲ | 84.3 | 58.8 / 73.1▲ | 71.6 / 74.4▲ |
| PaLI-X | 55B | 149.2 | 126.3 | 147.0 / 163.7▲ | 86.0 | 71.4 / 80.8▲ | 70.9 / 74.6▲ |
| **Generalist Models** | | | | | | | |
| Unified-IO | 2.9B | - | 100.0 | - | 77.9 | - | 57.4 |
| Florence-2-base-ft | 0.23B | 140.0 | 116.7 | 143.9 | 79.7 | 63.6 | 63.6 |
| Florence-2-large-ft | 0.77B | 143.3 | 124.9 | 151.1 | 81.7 | 73.5 | 72.6 |
| Method | # Params | COCO Det. val2017 mAP | Flickr30k test R@1 | RefCOCO val Accuracy | RefCOCO test-A Accuracy | RefCOCO test-B Accuracy | RefCOCO+ val Accuracy | RefCOCO+ test-A Accuracy | RefCOCO+ test-B Accuracy | RefCOCOg val Accuracy | RefCOCOg test Accuracy | RefCOCO RES val mIoU |
|----------------------|----------|-----------------------|--------------------|----------------------|-------------------------|-------------------------|------------------------|---------------------------|---------------------------|------------------------|-----------------------|------------------------|
| **Specialist Models** | | | | | | | | | | | | |
| SeqTR | - | - | - | 83.7 | 86.5 | 81.2 | 71.5 | 76.3 | 64.9 | 74.9 | 74.2 | - |
| PolyFormer | - | - | - | 90.4 | 92.9 | 87.2 | 85.0 | 89.8 | 78.0 | 85.8 | 85.9 | 76.9 |
| UNINEXT | 0.74B | 60.6 | - | 92.6 | 94.3 | 91.5 | 85.2 | 89.6 | 79.8 | 88.7 | 89.4 | - |
| Ferret | 13B | - | - | 89.5 | 92.4 | 84.4 | 82.8 | 88.1 | 75.2 | 85.8 | 86.3 | - |
| **Generalist Models** | | | | | | | | | | | | |
| UniTAB | - | - | - | 88.6 | 91.1 | 83.8 | 81.0 | 85.4 | 71.6 | 84.6 | 84.7 | - |
| Florence-2-base-ft | 0.23B | 41.4 | 84.0 | 92.6 | 94.8 | 91.5 | 86.8 | 91.7 | 82.2 | 89.8 | 82.2 | 78.0 |
| Florence-2-large-ft| 0.77B | 43.4 | 85.2 | 93.4 | 95.3 | 92.0 | 88.3 | 92.9 | 83.6 | 91.2 | 91.7 | 80.5 |
## BibTex and citation info
```
@article{xiao2023florence,
title={Florence-2: Advancing a unified representation for a variety of vision tasks},
author={Xiao, Bin and Wu, Haiping and Xu, Weijian and Dai, Xiyang and Hu, Houdong and Lu, Yumao and Zeng, Michael and Liu, Ce and Yuan, Lu},
journal={arXiv preprint arXiv:2311.06242},
year={2023}
}
```
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-jap-en)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-jap-en).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
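As a rough illustration of how such a BLEU comparison can be run (this is not our original evaluation script; the hypotheses and references below are placeholders), `sacrebleu` can score both sets of translations against shared references:
```python
# Hypothetical BLEU comparison between the vanilla and CTranslate2 checkpoints.
# `vanilla_hyps`, `ct2_hyps`, and `references` are placeholder sentence lists.
import sacrebleu

references = ["The cat sits on the mat.", "It is raining today."]
vanilla_hyps = ["The cat sits on the mat.", "It rains today."]
ct2_hyps = ["The cat is sitting on the mat.", "It rains today."]

vanilla_bleu = sacrebleu.corpus_bleu(vanilla_hyps, [references]).score
ct2_bleu = sacrebleu.corpus_bleu(ct2_hyps, [references]).score
print(f"vanilla BLEU: {vanilla_bleu:.2f}, ctranslate2 BLEU: {ct2_bleu:.2f}")
print(f"relative change: {(ct2_bleu - vanilla_bleu) / vanilla_bleu:+.1%}")
```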
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-jap-en --output_dir ./ctranslate2/opus-mt-jap-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-jap-en-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-jap-en-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-jap-en-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-jap-en) by Helsinki-NLP.
---
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: Para saber si un negocio va a funcionar, es necesario realizar un estudio
de mercado, valorar la economía local durante un año, considerar la afluencia
de personas y la ubicación, así como determinar el tamaño de la inversión.
- text: Apoyo la opinión de Tyrexito y también reclamo al Banco Sabadell por sus comisiones.
- text: Los resultados del Banco Sabadell impulsan al IBEX 35.
- text: Aunque no pude retirar el bono de festividad en el cajero, ING y AKBANK rechazaron
mis quejas, pero tras anunciar una denuncia, me transfirieron el dinero en una
hora; si tienes razón, no te rindas.
- text: El Gobierno presentará al nuevo gobernador del Banco de España en una Comisión
del Congreso este jueves.
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.7739130434782608
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
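A minimal sketch of that two-step recipe with the `setfit` library is shown below; the toy dataset and hyperparameters are illustrative, not the exact configuration used for this model:
```python
# Minimal sketch of the two-step SetFit recipe described above (illustrative
# toy data and hyperparameters; not the setup used to train this checkpoint).
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

train_dataset = Dataset.from_dict({
    "text": [
        "Nuevo caso de phishing relacionado con un banco.",
        "El banco ofrece una cuenta online sin comisiones.",
    ],
    "label": [1, 0],  # 1 = relevant, 0 = discard
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")
args = TrainingArguments(batch_size=16, num_epochs=1)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # step 1: contrastive fine-tuning of the body; step 2: fit the classification head
model.save_pretrained("setfit-minilm-bank-tweets")
```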
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:---------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| relevant | <ul><li>'Nuevo caso de phishing relacionado con Abanca, registrado el 23 de julio de 2024, con la URL: /www.inicio-abanca.com/es/WELE200M_Logon_Ini.aspx.'</li><li>'Una alumna que trabajó en Bancomer reveló un esquema de robo en el que dos cajeros afirmaban que un cliente había depositado mil pesos en un pago de dos mil y se quedaban con la mitad cada uno.'</li><li>'Las previsiones de crecimiento de España para 2024 han mejorado según diversas organizaciones, con estimaciones que oscilan entre el 1,8% y el 2,4%, impulsadas por turismo, exportaciones y trabajadores extranjeros.'</li></ul> |
| discard | <ul><li>'Banco Santander ofrece una cuenta en línea sin comisiones y un bono de 400€ por domiciliar tu nómina.'</li><li>'El BBVA fue el banco que peor me trató al tener que contratar productos innecesarios para conseguir mi primera hipoteca de funcionario.'</li><li>'CaixaBank se destaca como líder del sector bancario gracias a su sólido crecimiento y eficiencia operativa, convirtiéndose en una opción atractiva para inversores.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.7739 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("saraestevez/setfit-minilm-bank-tweets-processed-200")
# Run inference
preds = model("Los resultados del Banco Sabadell impulsan al IBEX 35.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 1 | 21.3275 | 41 |
| Label | Training Sample Count |
|:---------|:----------------------|
| discard | 200 |
| relevant | 200 |
### Training Hyperparameters
- batch_size: (16, 2)
- num_epochs: (1, 16)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0002 | 1 | 0.4199 | - |
| 0.0100 | 50 | 0.3357 | - |
| 0.0199 | 100 | 0.3198 | - |
| 0.0299 | 150 | 0.2394 | - |
| 0.0398 | 200 | 0.2411 | - |
| 0.0498 | 250 | 0.2277 | - |
| 0.0597 | 300 | 0.1876 | - |
| 0.0697 | 350 | 0.1481 | - |
| 0.0796 | 400 | 0.1533 | - |
| 0.0896 | 450 | 0.0145 | - |
| 0.0995 | 500 | 0.0113 | - |
| 0.1095 | 550 | 0.0045 | - |
| 0.1194 | 600 | 0.0201 | - |
| 0.1294 | 650 | 0.0008 | - |
| 0.1393 | 700 | 0.0003 | - |
| 0.1493 | 750 | 0.0003 | - |
| 0.1592 | 800 | 0.0003 | - |
| 0.1692 | 850 | 0.0001 | - |
| 0.1791 | 900 | 0.0001 | - |
| 0.1891 | 950 | 0.0001 | - |
| 0.1990 | 1000 | 0.0001 | - |
| 0.2090 | 1050 | 0.0001 | - |
| 0.2189 | 1100 | 0.0002 | - |
| 0.2289 | 1150 | 0.0001 | - |
| 0.2388 | 1200 | 0.0001 | - |
| 0.2488 | 1250 | 0.0001 | - |
| 0.2587 | 1300 | 0.0 | - |
| 0.2687 | 1350 | 0.0001 | - |
| 0.2786 | 1400 | 0.0001 | - |
| 0.2886 | 1450 | 0.0001 | - |
| 0.2985 | 1500 | 0.0 | - |
| 0.3085 | 1550 | 0.0001 | - |
| 0.3184 | 1600 | 0.0 | - |
| 0.3284 | 1650 | 0.0 | - |
| 0.3383 | 1700 | 0.0 | - |
| 0.3483 | 1750 | 0.0001 | - |
| 0.3582 | 1800 | 0.0 | - |
| 0.3682 | 1850 | 0.0 | - |
| 0.3781 | 1900 | 0.0 | - |
| 0.3881 | 1950 | 0.0 | - |
| 0.3980 | 2000 | 0.0 | - |
| 0.4080 | 2050 | 0.0 | - |
| 0.4179 | 2100 | 0.0 | - |
| 0.4279 | 2150 | 0.0 | - |
| 0.4378 | 2200 | 0.0 | - |
| 0.4478 | 2250 | 0.0 | - |
| 0.4577 | 2300 | 0.0 | - |
| 0.4677 | 2350 | 0.0 | - |
| 0.4776 | 2400 | 0.0 | - |
| 0.4876 | 2450 | 0.0 | - |
| 0.4975 | 2500 | 0.0 | - |
| 0.5075 | 2550 | 0.0 | - |
| 0.5174 | 2600 | 0.0 | - |
| 0.5274 | 2650 | 0.0 | - |
| 0.5373 | 2700 | 0.0 | - |
| 0.5473 | 2750 | 0.0 | - |
| 0.5572 | 2800 | 0.0 | - |
| 0.5672 | 2850 | 0.0 | - |
| 0.5771 | 2900 | 0.0 | - |
| 0.5871 | 2950 | 0.0 | - |
| 0.5970 | 3000 | 0.0 | - |
| 0.6070 | 3050 | 0.0 | - |
| 0.6169 | 3100 | 0.0 | - |
| 0.6269 | 3150 | 0.0 | - |
| 0.6368 | 3200 | 0.0 | - |
| 0.6468 | 3250 | 0.0 | - |
| 0.6567 | 3300 | 0.0 | - |
| 0.6667 | 3350 | 0.0 | - |
| 0.6766 | 3400 | 0.0 | - |
| 0.6866 | 3450 | 0.0 | - |
| 0.6965 | 3500 | 0.0 | - |
| 0.7065 | 3550 | 0.0 | - |
| 0.7164 | 3600 | 0.0 | - |
| 0.7264 | 3650 | 0.0 | - |
| 0.7363 | 3700 | 0.0 | - |
| 0.7463 | 3750 | 0.0 | - |
| 0.7562 | 3800 | 0.0 | - |
| 0.7662 | 3850 | 0.0 | - |
| 0.7761 | 3900 | 0.0 | - |
| 0.7861 | 3950 | 0.0 | - |
| 0.7960 | 4000 | 0.0 | - |
| 0.8060 | 4050 | 0.0 | - |
| 0.8159 | 4100 | 0.0 | - |
| 0.8259 | 4150 | 0.0 | - |
| 0.8358 | 4200 | 0.0 | - |
| 0.8458 | 4250 | 0.0 | - |
| 0.8557 | 4300 | 0.0 | - |
| 0.8657 | 4350 | 0.0 | - |
| 0.8756 | 4400 | 0.0 | - |
| 0.8856 | 4450 | 0.0 | - |
| 0.8955 | 4500 | 0.0 | - |
| 0.9055 | 4550 | 0.0 | - |
| 0.9154 | 4600 | 0.0 | - |
| 0.9254 | 4650 | 0.0 | - |
| 0.9353 | 4700 | 0.0 | - |
| 0.9453 | 4750 | 0.0 | - |
| 0.9552 | 4800 | 0.0 | - |
| 0.9652 | 4850 | 0.0 | - |
| 0.9751 | 4900 | 0.0 | - |
| 0.9851 | 4950 | 0.0 | - |
| 0.9950 | 5000 | 0.0 | - |
### Framework Versions
- Python: 3.11.0rc1
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.39.0
- PyTorch: 2.3.1+cu121
- Datasets: 2.19.1
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "Para saber si un negocio va a funcionar, es necesario realizar un estudio de mercado, valorar la economía local durante un año, considerar la afluencia de personas y la ubicación, así como determinar el tamaño de la inversión."}, {"text": "Apoyo la opinión de Tyrexito y también reclamo al Banco Sabadell por sus comisiones."}, {"text": "Los resultados del Banco Sabadell impulsan al IBEX 35."}, {"text": "Aunque no pude retirar el bono de festividad en el cajero, ING y AKBANK rechazaron mis quejas, pero tras anunciar una denuncia, me transfirieron el dinero en una hora; si tienes razón, no te rindas."}, {"text": "El Gobierno presentará al nuevo gobernador del Banco de España en una Comisión del Congreso este jueves."}], "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.7739130434782608, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,282 |
NbAiLab/nb-whisper-small-verbatim
|
NbAiLab
|
automatic-speech-recognition
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"onnx",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"asr",
"hf-asr-leaderboard",
"no",
"nb",
"nn",
"en",
"dataset:NbAiLab/ncc_speech",
"dataset:NbAiLab/NST",
"dataset:NbAiLab/NPSC",
"arxiv:2212.04356",
"base_model:openai/whisper-small",
"base_model:quantized:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-02-13T10:08:16Z |
2024-02-13T12:30:19+00:00
| 23 | 0 |
---
base_model: openai/whisper-small
datasets:
- NbAiLab/ncc_speech
- NbAiLab/NST
- NbAiLab/NPSC
language:
- 'no'
- nb
- nn
- en
library_name: transformers
license: apache-2.0
metrics:
- wer
- cer
pipeline_tag: automatic-speech-recognition
tags:
- audio
- asr
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/1/audio/audio.mp3
example_title: FLEURS sample 1
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/4/audio/audio.mp3
example_title: FLEURS sample 2
---
# Finetuned Verbatim model.
This model is trained for 200 additional steps on top of the model below. As a result, it outputs only lowercase text without punctuation. It is also considerably more verbatim and will not attempt to correct grammatical errors in the text.
# NB-Whisper Small Verbatim
Introducing the **_Norwegian NB-Whisper Small Verbatim model_**, proudly developed by the National Library of Norway. NB-Whisper is a cutting-edge series of models designed for automatic speech recognition (ASR) and speech translation. These models are based on the work of [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). Each model in the series has been trained for 250,000 steps, utilizing a diverse dataset of 8 million samples. These samples consist of aligned audio clips, each 30 seconds long, culminating in a staggering 66,000 hours of speech. For an in-depth understanding of our training methodology and dataset composition, keep an eye out for our upcoming article.
| Model Size | Parameters | Model |
|------------|------------|------------|
| Tiny | 39M | [NB-Whisper Tiny](https://huggingface.co/NbAiLab/nb-whisper-tiny) |
| Base | 74M | [NB-Whisper Base](https://huggingface.co/NbAiLab/nb-whisper-base) |
| Small | 244M | [NB-Whisper Small](https://huggingface.co/NbAiLab/nb-whisper-small) |
| Medium | 769M | [NB-Whisper Medium](https://huggingface.co/NbAiLab/nb-whisper-medium) |
| Large | 1550M | [NB-Whisper Large](https://huggingface.co/NbAiLab/nb-whisper-large) |
### Verbatim Model
While the main models are suitable for most transcription tasks, we demonstrate how easy it is to change the output of the main model. The following models are trained for 250 additional steps from the main models above and might be suitable for more targeted use cases:
- **Verbatim version**: This lower-cased variant is more literal and suitable for tasks requiring detailed transcription, such as linguistic analysis.
| Model Size | Parameters | Semantic version |
|------------|------------|------------------|
| Tiny | 39M | [Tiny - semantic](https://huggingface.co/NbAiLab/nb-whisper-tiny-semantic) |
| Base | 74M | [Base - semantic](https://huggingface.co/NbAiLab/nb-whisper-base-semantic) |
| Small | 244M | [Small - semantic](https://huggingface.co/NbAiLab/nb-whisper-small-semantic) |
| Medium | 769M | [Medium - semantic](https://huggingface.co/NbAiLab/nb-whisper-medium-semantic) |
| Large | 1550M | [Large - semantic](https://huggingface.co/NbAiLab/nb-whisper-large-semantic) |
### Model Description
- **Developed by:** [NB AI-Lab](https://ai.nb.no/)
- **Shared by:** [NB AI-Lab](https://ai.nb.no/)
- **Model type:** `whisper`
- **Language(s) (NLP):** Norwegian, Norwegian Bokmål, Norwegian Nynorsk, English
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Trained from model:** [openai/whisper-small](https://huggingface.co/openai/whisper-small)
- **Code Repository:** https://github.com/NbAiLab/nb-whisper/
- **Paper:** _Coming soon_
- **Demo:** _See Spaces on this page_
## How to Use the Models
### Online Demos
You can try the models directly through the HuggingFace Inference API, accessible on the right side of this page. Be aware that initially, the model needs to load and will run on limited CPU capacity, which might be slow. To enhance your experience, we are temporarily hosting some models on TPUs for a few days, significantly boosting their performance. Explore these under the **Spaces** section on the [Main Page](https://huggingface.co/NbAiLab/).
### Local Setup with HuggingFace
Alternatively, you can run the models locally. The Tiny, Base, and Small models are optimized for CPU execution. For the Medium and Large models, we recommend a system equipped with a GPU to ensure efficient processing. Setting up and using these models with HuggingFace's Transformers is straightforward, provided you have [Python](https://www.python.org/downloads/) installed on your machine. For practical demonstrations, refer to examples using this [sample mp3 file](https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3).
```bash
# Download the sample file
$ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3
# Install necessary libraries.
$ pip install "transformers>=4.35.2"
```
After this is done, you should be able to run this in Python:
```python
from transformers import pipeline
# Load the model
asr = pipeline("automatic-speech-recognition", "NbAiLabBeta/nb-whisper-medium-verbatim")
#transcribe
asr("king.mp3", generate_kwargs={'task': 'transcribe', 'language': 'no'})
```
<details>
<summary>Expected output</summary>
```json
{
{'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra.'}
}
```
</details>
#### Extended HuggingFace
Examining the output above, we see that there are multiple repetitions at the end. This is because the video is longer than 30 seconds. By passing the ```chunk_length_s``` argument, we can transcribe longer files. Our experience is that we get slightly better results by setting it to 28 seconds instead of the default 30 seconds. We also recommend setting the beam size to 5 if possible. This greatly increases the accuracy but takes a bit longer and requires slightly more memory. The examples below also illustrate how to transcribe to English or Nynorsk, and how to get timestamps for sentences and words.
```python
# Long Transcripts
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'no'})
# Increase accuracy by setting beam size to 5
asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'num_beams': 5, 'task': 'transcribe', 'language': 'no'})
# Return Timestamps
asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'task': 'transcribe', 'language': 'no'})
# Return Word Level Timestamps
asr("king.mp3", chunk_length_s=28, return_timestamps="word", generate_kwargs={'task': 'transcribe', 'language': 'no'})
# Transcribe to Nynorsk
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'nn'})
# Transcribe to English
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'en'})
```
<details>
<summary>Expected output</summary>
Long transcripts:
```json
{
{'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra, hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'}
}
```
Timestamps:
```json
{
{'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.',
'chunks': [{'timestamp': (0.0, 5.46),
'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger'},
{'timestamp': (5.52, 8.68), 'text': ' og folk fra alle andre regioner.'},
{'timestamp': (8.68, 16.64),
'text': ' Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria.'},
{'timestamp': (16.64, 13.3),
'text': ' Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra.'},
{'timestamp': (13.32, 30.28),
'text': ' Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører.'},
{'timestamp': (32.52, 39.16),
'text': ' Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres'},
{'timestamp': (39.16, 42.0), 'text': ' innenfor landegrenser.'},
{'timestamp': (42.0, 46.74),
'text': ' Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter,'},
{'timestamp': (46.74, 51.12),
'text': ' og jenter og gutter som er glad i hverandre.'},
{'timestamp': (51.16, 57.42),
'text': ' Nordmenn trommer på Gud, Allah, Altet og ingenting.'},
{'timestamp': (57.42, 64.3),
'text': ' Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes.'},
{'timestamp': (64.34, 71.24),
'text': ' Med andre ord, Norge er dere. Norge er oss.'},
{'timestamp': (71.24, 78.04),
'text': ' Mitt største håp for Norge er at vi skal klare å ta vare på hverandre,'},
{'timestamp': (78.12, 84.68),
'text': ' at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'}]}
}
```
Word Level Timestamps:
```json
{
{"text": "Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.",
"chunks": [
{"text": "Nordmenn", "timestamp": [0.72, 1.42]},
{"text": "er", "timestamp": [1.42, 1.74]},
// ... more chunks ...
{"text": "raushet.", "timestamp": [83.1, 84.88]}
]
}
}
```
Nynorsk:
```json
{
{"text": "Nordmenn er nordlendingar, trøndarar, sørlendingar og folk frå alle andre regionar. Nordmenn er også innvandra frå Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikkje alltid så lett å seie kvar vi er frå, kva nasjonalitet vi tilhøyrer. Det vi kallar heim, er der hjartet vårt er, og det kan ikkje alltid plasserast innanfor landegrenser. Nordmenn er jenter som er glad i jenter, gutar som erade i gutar, og jenter og gutar som er glade i kvarandre. Nordmenn trommar på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Noreg er dere! Noreg er oss. Mitt største håp for Noreg er at vi skal klare å ta vare på kvarandre, at vi skal byggje dette landet vidare på tillit, fellesskap og raushet."}
}
```
English:
```json
{
{"text": "Norwegians are Norwegians, trønders, southerners and people from all other regions. Norwegians are also invaded from Afghanistan, Pakistan, Poland, Sweden, Somalia and Suria. It is not always so easy to say where we are from, what nationality we belong to. What we call home is where our heart is, and it cannot always be placed within national borders. Norwegians are girls who like girls, boys who like boys, and girls and boys who like each other. Norwegians thrump on God, Allah, Altet and nothing. Norwegians like Grieg, Kygo, Helbilis and Kari Bremnes. In other words, Norway is you. Norway is us. My biggest hope for Norway is that we should be able to take care of each other, that we should build this country on trust, community and generosity."}
}
```
</details>
### Whisper CPP
Whisper CPP is a C++ implementation of the Whisper model, offering the same functionalities with the added benefits of C++ efficiency and performance optimizations. This allows embedding any Whisper model into a binary file, facilitating the development of real applications. However, it requires some familiarity with compiling C++ programs. Their [homepage](https://github.com/ggerganov/whisper.cpp) provides examples of how to build applications, including real-time transcription.
We have converted this model to the ggml-format model used by Whisper CPP binaries. The file can be downloaded [here](blob/main/ggml-model.bin), and a `q5_0` quantized version is also available [here](blob/main/ggml-model-q5_0.bin).
```bash
# We can download and compile whisper.cpp
$ git clone --depth 1 https://github.com/ggerganov/whisper.cpp --branch v1.5.1
$ cd whisper.cpp/
$ make
# We also need to convert the audio to WAV as that is the only format supported by whisper.cpp
$ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3
$ ffmpeg -i king.mp3 -ar 16000 -ac 1 -c:a pcm_s16le king.wav
# Lets download the two ggml-files from this site
wget -N https://huggingface.co/NbAiLab/nb-whisper-small/resolve/main/ggml-model.bin -O models/nb-small-ggml-model.bin
wget -N https://huggingface.co/NbAiLab/nb-whisper-small/resolve/main/ggml-model-q5_0.bin -O models/nb-small-ggml-model-q5_0.bin
# And run it with the f16 default model
$ ./main -l no -m models/nb-small-ggml-model.bin king.wav
# Or the quantized version
$ ./main -l no -m models/nb-small-ggml-model-q5_0.bin king.wav
```
### WhisperX and Speaker Diarization
Speaker diarization is a technique in natural language processing and automatic speech recognition that identifies and separates different speakers in an audio recording. It segments the audio into parts based on who is speaking, enhancing the quality of transcribing meetings or phone calls. We find that [WhisperX](https://github.com/m-bain/whisperX) is the easiest way to use our models for diarizing speech. In addition, WhisperX uses phoneme-based Wav2Vec models to improve the alignment of the timestamps. As of December 2023 it also has native support for the nb-wav2vec models. It currently uses [PyAnnote-audio](https://github.com/pyannote/pyannote-audio) for the actual diarization. This package has a fairly strict licence that requires you to agree to its user terms. Follow the instructions below.
```bash
# Follow the install instructions on https://github.com/m-bain/whisperX
# Make sure you have a HuggingFace account and have agreed to the pyannote terms
# Log in (or supply HF Token in command line)
huggingface-cli login
# Download a test file
wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/knuthamsun.mp3
# Optional. If you get complaints about missing support for Norwegian, do:
pip uninstall whisperx && pip install git+https://github.com/m-bain/whisperx.git@8540ff5985fceee764acbed94f656063d7f56540
# Transcribe the test file. All transcripts will end up in the directory of the mp3-file
whisperx knuthamsun.mp3 --model NbAiLabBeta/nb-whisper-medium-verbatim --language no --diarize
```
You can also run WhisperX from Python. Please take a look at the instructions on [WhisperX homepage](https://github.com/m-bain/whisperX).
### API
Instructions for accessing the models via a simple API are included in the demos under Spaces. Note that these demos are temporary and will only be available for a few weeks.
## Training Data
The training data originates from Språkbanken and the National Library of Norway's digital collection, including:
- NST Norwegian ASR Database (16 kHz) and its corresponding dataset
- Transcribed speeches from the Norwegian Parliament by Språkbanken
- TV broadcast (NRK) subtitles (NLN digital collection)
- Audiobooks (NLN digital collection)
## Downstream Use
The models, especially the smaller ones, may exhibit occasional hallucinations and may drop parts of the transcript. They are designed to convert spoken language into grammatically correct written sentences, which might not always be word-for-word translations. We have made two extra model variants for users who want a different transcription style. We encourage users to try the models themselves to get a better understanding.
## Bias, Risks, and Limitations
Using these models without adequate risk assessment and mitigation could be considered irresponsible. They may contain biases or other undesirable distortions. Users who deploy these models or integrate them into systems or services are responsible for mitigating risks and complying with applicable AI regulations. The National Library of Norway, as the model owner, disclaims liability for any outcomes resulting from third-party use of these models.
### Software
The model was trained using Jax/Flax and converted to PyTorch, TensorFlow, whisper.cpp, and ONNX formats. These are available under `Files and versions`. We welcome requests for conversion to other formats. All training code and scripts are released under the Apache License 2.0 in the GitHub repository [nb-whisper](https://github.com/NbAiLab/nb-whisper/).
## Citation & Contributors
The NB-Whisper Small Verbatim model is a product of the NoSTram project led by Per Egil Kummervold ([@pere](https://huggingface.co/pere)) at the National Library of Norway. Key contributors include Javier de la Rosa ([@versae](https://huggingface.co/versae)), Freddy Wetjen ([@freddyw](https://huggingface.co/freddyw)), and Rolv-Arild Braaten ([@Rolv-Arild](https://huggingface.co/Rolv-Arild)). NB AI-Lab, under the direction of Svein Arne Brygfjeld ([@Brygfjeld](https://huggingface.co/Brygfjeld)), supported the project's successful completion. A detailed paper on our process and findings is forthcoming.
## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (The National Library of Norway) be liable for any results arising from the use made by third parties of these models.
## Acknowledgements
Our gratitude extends to [Google TPU Research Cloud](https://sites.research.google/trc/about/) for training resources, Google Cloud for translation credits, and HuggingFace's Sanchit Gandhi for technical support. A special thank you to Per Erik Solberg at Språkbanken for the collaboration on the Stortinget corpus.
## Contact
For feedback, technical concerns, or collaboration inquiries, please contact <a rel="noopener nofollow" href="mailto:[email protected]">[email protected]</a>. If you plan to include this model in your research, contact us for the latest information on our upcoming paper for citation purposes.
| null |
Non_BioNLP
|
# Finetuned Verbatim model.
This model is trained for 200 additional steps on top of the model below. As a result, it outputs only lowercase text without punctuation. It is also considerably more verbatim and will not attempt to correct grammatical errors in the text.
# NB-Whisper Small Verbatim
Introducing the **_Norwegian NB-Whisper Small Verbatim model_**, proudly developed by the National Library of Norway. NB-Whisper is a cutting-edge series of models designed for automatic speech recognition (ASR) and speech translation. These models are based on the work of [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). Each model in the series has been trained for 250,000 steps, utilizing a diverse dataset of 8 million samples. These samples consist of aligned audio clips, each 30 seconds long, culminating in a staggering 66,000 hours of speech. For an in-depth understanding of our training methodology and dataset composition, keep an eye out for our upcoming article.
| Model Size | Parameters | Model |
|------------|------------|------------|
| Tiny | 39M | [NB-Whisper Tiny](https://huggingface.co/NbAiLab/nb-whisper-tiny) |
| Base | 74M | [NB-Whisper Base](https://huggingface.co/NbAiLab/nb-whisper-base) |
| Small | 244M | [NB-Whisper Small](https://huggingface.co/NbAiLab/nb-whisper-small) |
| Medium | 769M | [NB-Whisper Medium](https://huggingface.co/NbAiLab/nb-whisper-medium) |
| Large | 1550M | [NB-Whisper Large](https://huggingface.co/NbAiLab/nb-whisper-large) |
### Verbatim Model
While the main models are suitable for most transcription tasks, we demonstrate how easy it is to change the output of the main model. The following models are trained for 250 additional steps from the main models above and might be suitable for more targeted use cases:
- **Verbatim version**: This lower-cased variant is more literal and suitable for tasks requiring detailed transcription, such as linguistic analysis.
| Model Size | Parameters | Semantic version |
|------------|------------|------------------|
| Tiny | 39M | [Tiny - semantic](https://huggingface.co/NbAiLab/nb-whisper-tiny-semantic) |
| Base | 74M | [Base - semantic](https://huggingface.co/NbAiLab/nb-whisper-base-semantic) |
| Small | 244M | [Small - semantic](https://huggingface.co/NbAiLab/nb-whisper-small-semantic) |
| Medium | 769M | [Medium - semantic](https://huggingface.co/NbAiLab/nb-whisper-medium-semantic) |
| Large | 1550M | [Large - semantic](https://huggingface.co/NbAiLab/nb-whisper-large-semantic) |
### Model Description
- **Developed by:** [NB AI-Lab](https://ai.nb.no/)
- **Shared by:** [NB AI-Lab](https://ai.nb.no/)
- **Model type:** `whisper`
- **Language(s) (NLP):** Norwegian, Norwegian Bokmål, Norwegian Nynorsk, English
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Trained from model:** [openai/whisper-small](https://huggingface.co/openai/whisper-small)
- **Code Repository:** https://github.com/NbAiLab/nb-whisper/
- **Paper:** _Coming soon_
- **Demo:** _See Spaces on this page_
## How to Use the Models
### Online Demos
You can try the models directly through the HuggingFace Inference API, accessible on the right side of this page. Be aware that initially, the model needs to load and will run on limited CPU capacity, which might be slow. To enhance your experience, we are temporarily hosting some models on TPUs for a few days, significantly boosting their performance. Explore these under the **Spaces** section on the [Main Page](https://huggingface.co/NbAiLab/).
### Local Setup with HuggingFace
Alternatively, you can run the models locally. The Tiny, Base, and Small models are optimized for CPU execution. For the Medium and Large models, we recommend a system equipped with a GPU to ensure efficient processing. Setting up and using these models with HuggingFace's Transformers is straightforward, provided you have [Python](https://www.python.org/downloads/) installed on your machine. For practical demonstrations, refer to examples using this [sample mp3 file](https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3).
```bash
# Download the sample file
$ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3
# Install necessary libraries.
$ pip install "transformers>=4.35.2"
```
After this is done, you should be able to run this in Python:
```python
from transformers import pipeline
# Load the model
asr = pipeline("automatic-speech-recognition", "NbAiLabBeta/nb-whisper-medium-verbatim")
#transcribe
asr("king.mp3", generate_kwargs={'task': 'transcribe', 'language': 'no'})
```
<details>
<summary>Expected output</summary>
```json
{
{'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra.'}
}
```
</details>
#### Extended HuggingFace
Examining the output above, we see that there are multiple repetitions at the end. This is because the video is longer than 30 seconds. By passing the ```chunk_length_s``` argument, we can transcribe longer files. Our experience is that we get slightly better results by setting it to 28 seconds instead of the default 30 seconds. We also recommend setting the beam size to 5 if possible. This greatly increases the accuracy but takes a bit longer and requires slightly more memory. The examples below also illustrate how to transcribe to English or Nynorsk, and how to get timestamps for sentences and words.
```python
# Long Transcripts
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'no'})
# Increase accuracy by setting beam size to 5
asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'num_beams': 5, 'task': 'transcribe', 'language': 'no'})
# Return Timestamps
asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'task': 'transcribe', 'language': 'no'})
# Return Word Level Timestamps
asr("king.mp3", chunk_length_s=28, return_timestamps="word", generate_kwargs={'task': 'transcribe', 'language': 'no'})
# Transcribe to Nynorsk
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'nn'})
# Transcribe to English
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'en'})
```
<details>
<summary>Expected output</summary>
Long transcripts:
```json
{
{'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra, hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'}
}
```
Timestamps:
```json
{
{'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.',
'chunks': [{'timestamp': (0.0, 5.46),
'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger'},
{'timestamp': (5.52, 8.68), 'text': ' og folk fra alle andre regioner.'},
{'timestamp': (8.68, 16.64),
'text': ' Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria.'},
{'timestamp': (16.64, 13.3),
'text': ' Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra.'},
{'timestamp': (13.32, 30.28),
'text': ' Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører.'},
{'timestamp': (32.52, 39.16),
'text': ' Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres'},
{'timestamp': (39.16, 42.0), 'text': ' innenfor landegrenser.'},
{'timestamp': (42.0, 46.74),
'text': ' Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter,'},
{'timestamp': (46.74, 51.12),
'text': ' og jenter og gutter som er glad i hverandre.'},
{'timestamp': (51.16, 57.42),
'text': ' Nordmenn trommer på Gud, Allah, Altet og ingenting.'},
{'timestamp': (57.42, 64.3),
'text': ' Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes.'},
{'timestamp': (64.34, 71.24),
'text': ' Med andre ord, Norge er dere. Norge er oss.'},
{'timestamp': (71.24, 78.04),
'text': ' Mitt største håp for Norge er at vi skal klare å ta vare på hverandre,'},
{'timestamp': (78.12, 84.68),
'text': ' at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'}]}
}
```
Word Level Timestamps:
```json
{
{"text": "Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.",
"chunks": [
{"text": "Nordmenn", "timestamp": [0.72, 1.42]},
{"text": "er", "timestamp": [1.42, 1.74]},
// ... more chunks ...
{"text": "raushet.", "timestamp": [83.1, 84.88]}
]
}
}
```
Nynorsk:
```json
{
{"text": "Nordmenn er nordlendingar, trøndarar, sørlendingar og folk frå alle andre regionar. Nordmenn er også innvandra frå Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikkje alltid så lett å seie kvar vi er frå, kva nasjonalitet vi tilhøyrer. Det vi kallar heim, er der hjartet vårt er, og det kan ikkje alltid plasserast innanfor landegrenser. Nordmenn er jenter som er glad i jenter, gutar som erade i gutar, og jenter og gutar som er glade i kvarandre. Nordmenn trommar på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Noreg er dere! Noreg er oss. Mitt største håp for Noreg er at vi skal klare å ta vare på kvarandre, at vi skal byggje dette landet vidare på tillit, fellesskap og raushet."}
}
```
English:
```json
{
{"text": "Norwegians are Norwegians, trønders, southerners and people from all other regions. Norwegians are also invaded from Afghanistan, Pakistan, Poland, Sweden, Somalia and Suria. It is not always so easy to say where we are from, what nationality we belong to. What we call home is where our heart is, and it cannot always be placed within national borders. Norwegians are girls who like girls, boys who like boys, and girls and boys who like each other. Norwegians thrump on God, Allah, Altet and nothing. Norwegians like Grieg, Kygo, Helbilis and Kari Bremnes. In other words, Norway is you. Norway is us. My biggest hope for Norway is that we should be able to take care of each other, that we should build this country on trust, community and generosity."}
}
```
</details>
### Whisper CPP
Whisper CPP is a C++ implementation of the Whisper model, offering the same functionalities with the added benefits of C++ efficiency and performance optimizations. This allows embedding any Whisper model into a binary file, facilitating the development of real applications. However, it requires some familiarity with compiling C++ programs. Their [homepage](https://github.com/ggerganov/whisper.cpp) provides examples of how to build applications, including real-time transcription.
We have converted this model to the ggml-format model used by Whisper CPP binaries. The file can be downloaded [here](blob/main/ggml-model.bin), and a `q5_0` quantized version is also available [here](blob/main/ggml-model-q5_0.bin).
```bash
# We can download and compile whisper.cpp
$ git clone --depth 1 https://github.com/ggerganov/whisper.cpp --branch v1.5.1
$ cd whisper.cpp/
$ make
# We also need to convert the audio to WAV as that is the only format supported by whisper.cpp
$ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3
$ ffmpeg -i king.mp3 -ar 16000 -ac 1 -c:a pcm_s16le king.wav
# Lets download the two ggml-files from this site
wget -N https://huggingface.co/NbAiLab/nb-whisper-small/resolve/main/ggml-model.bin -O models/nb-small-ggml-model.bin
wget -N https://huggingface.co/NbAiLab/nb-whisper-small/resolve/main/ggml-model-q5_0.bin -O models/nb-small-ggml-model-q5_0.bin
# And run it with the f16 default model
$ ./main -l no -m models/nb-small-ggml-model.bin king.wav
# Or the quantized version
$ ./main -l no -m models/nb-small-ggml-model-q5_0.bin king.wav
```
### WhisperX and Speaker Diarization
Speaker diarization is a technique in natural language processing and automatic speech recognition that identifies and separates different speakers in an audio recording. It segments the audio into parts based on who is speaking, enhancing the quality of transcribing meetings or phone calls. We find that [WhisperX](https://github.com/m-bain/whisperX) is the easiest way to use our models for diarizing speech. In addition, WhisperX uses phoneme-based Wav2Vec models to improve the alignment of the timestamps. As of December 2023 it also has native support for the nb-wav2vec models. It currently uses [PyAnnote-audio](https://github.com/pyannote/pyannote-audio) for the actual diarization. This package has a fairly strict licence that requires you to agree to its user terms. Follow the instructions below.
```bash
# Follow the install instructions on https://github.com/m-bain/whisperX
# Make sure you have a HuggingFace account and have agreed to the pyannote terms
# Log in (or supply HF Token in command line)
huggingface-cli login
# Download a test file
wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/knuthamsun.mp3
# Optional. If you get complaints about missing support for Norwegian, do:
pip uninstall whisperx && pip install git+https://github.com/m-bain/whisperx.git@8540ff5985fceee764acbed94f656063d7f56540
# Transcribe the test file. All transcripts will end up in the directory of the mp3-file
whisperx knuthamsun.mp3 --model NbAiLabBeta/nb-whisper-medium-verbatim --language no --diarize
```
You can also run WhisperX from Python. Please take a look at the instructions on [WhisperX homepage](https://github.com/m-bain/whisperX).
### API
Instructions for accessing the models via a simple API are included in the demos under Spaces. Note that these demos are temporary and will only be available for a few weeks.
## Training Data
The training data originates from Språkbanken and the National Library of Norway's digital collection, including:
- NST Norwegian ASR Database (16 kHz) and its corresponding dataset
- Transcribed speeches from the Norwegian Parliament by Språkbanken
- TV broadcast (NRK) subtitles (NLN digital collection)
- Audiobooks (NLN digital collection)
## Downstream Use
The models, especially the smaller ones, may exhibit occasional hallucinations and may drop parts of the transcript. They are designed to convert spoken language into grammatically correct written sentences, which might not always be word-for-word translations. We have made two extra model variants for users who want a different transcription style. We encourage users to try the models themselves to get a better understanding.
## Bias, Risks, and Limitations
Using these models without adequate risk assessment and mitigation could be considered irresponsible. They may contain biases or other undesirable distortions. Users who deploy these models or integrate them into systems or services are responsible for mitigating risks and complying with applicable AI regulations. The National Library of Norway, as the model owner, disclaims liability for any outcomes resulting from third-party use of these models.
### Software
The model was trained using Jax/Flax and converted to PyTorch, TensorFlow, whisper.cpp, and ONNX formats. These are available under `Files and versions`. We welcome requests for conversion to other formats. All training code and scripts are released under the Apache License 2.0 in the GitHub repository [nb-whisper](https://github.com/NbAiLab/nb-whisper/).
## Citation & Contributors
The NB-Whisper Small Verbatim model is a product of the NoSTram project led by Per Egil Kummervold ([@pere](https://huggingface.co/pere)) at the National Library of Norway. Key contributors include Javier de la Rosa ([@versae](https://huggingface.co/versae)), Freddy Wetjen ([@freddyw](https://huggingface.co/freddyw)), and Rolv-Arild Braaten ([@Rolv-Arild](https://huggingface.co/Rolv-Arild)). NB AI-Lab, under the direction of Svein Arne Brygfjeld ([@Brygfjeld](https://huggingface.co/Brygfjeld)), supported the project's successful completion. A detailed paper on our process and findings is forthcoming.
## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (The National Library of Norway) be liable for any results arising from the use made by third parties of these models.
## Acknowledgements
Our gratitude extends to [Google TPU Research Cloud](https://sites.research.google/trc/about/) for training resources, Google Cloud for translation credits, and HuggingFace's Sanchit Gandhi for technical support. A special thank you to Per Erik Solberg at Språkbanken for the collaboration on the Stortinget corpus.
## Contact
For feedback, technical concerns, or collaboration inquiries, please contact <a rel="noopener nofollow" href="mailto:[email protected]">[email protected]</a>. If you plan to include this model in your research, contact us for the latest information on our upcoming paper for citation purposes.
|
{"base_model": "openai/whisper-small", "datasets": ["NbAiLab/ncc_speech", "NbAiLab/NST", "NbAiLab/NPSC"], "language": ["no", "nb", "nn", "en"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["wer", "cer"], "pipeline_tag": "automatic-speech-recognition", "tags": ["audio", "asr", "automatic-speech-recognition", "hf-asr-leaderboard"], "widget": [{"src": "https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/1/audio/audio.mp3", "example_title": "FLEURS sample 1"}, {"src": "https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/4/audio/audio.mp3", "example_title": "FLEURS sample 2"}]}
|
task
|
[
"TRANSLATION"
] | 42,283 |
YounghoonNoh/distilbert-base-uncased-finetuned-emotion
|
YounghoonNoh
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-04-11T07:25:22Z |
2024-04-11T07:37:24+00:00
| 19 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.928
name: Accuracy
- type: f1
value: 0.9279943563389383
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2183
- Accuracy: 0.928
- F1: 0.9280
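As a quick sanity check, the checkpoint can be queried through the `text-classification` pipeline. This is only a sketch: the example sentence is made up, and the returned label will be a raw index such as `LABEL_1` unless `id2label` is set in the model config.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="YounghoonNoh/distilbert-base-uncased-finetuned-emotion",
)

# Hypothetical input; the emotion dataset covers sadness, joy, love, anger, fear, surprise.
print(classifier("I'm thrilled that the fine-tuning finally converged!"))
# e.g. [{'label': 'joy', 'score': 0.98}] (or 'LABEL_1' if id2label is not set)
```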
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8436 | 1.0 | 250 | 0.3161 | 0.909 | 0.9084 |
| 0.2535 | 2.0 | 500 | 0.2183 | 0.928 | 0.9280 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2183
- Accuracy: 0.928
- F1: 0.9280
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8436 | 1.0 | 250 | 0.3161 | 0.909 | 0.9084 |
| 0.2535 | 2.0 | 500 | 0.2183 | 0.928 | 0.9280 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.928, "name": "Accuracy"}, {"type": "f1", "value": 0.9279943563389383, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,284 |
Helsinki-NLP/opus-mt-lg-fi
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lg",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2023-08-16T11:59:58+00:00
| 37 | 0 |
---
license: apache-2.0
tags:
- translation
---
### opus-mt-lg-fi
* source languages: lg
* target languages: fi
* OPUS readme: [lg-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lg-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/lg-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lg.fi | 21.8 | 0.424 |
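As a usage sketch (not part of the original card), the checkpoint loads through the standard `transformers` translation pipeline. The Luganda example sentence and the exact Finnish output are illustrative only.

```python
from transformers import pipeline

# Marian translation pipeline for Luganda (lg) -> Finnish (fi)
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-lg-fi")

print(translator("Nkwagala nnyo."))  # hypothetical Luganda input ("I love you very much")
# -> [{'translation_text': '...'}]   # Finnish output
```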
| null |
Non_BioNLP
|
### opus-mt-lg-fi
* source languages: lg
* target languages: fi
* OPUS readme: [lg-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lg-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/lg-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lg.fi | 21.8 | 0.424 |
|
{"license": "apache-2.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 42,286 |
Junhee98/distilbert-base-uncased-finetuned-emotion
|
Junhee98
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-05-29T05:14:01Z |
2024-05-29T05:18:48+00:00
| 4 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.927
name: Accuracy
- type: f1
value: 0.9269921912911402
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2139
- Accuracy: 0.927
- F1: 0.9270
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
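For readers who want to reproduce this setup, the hyperparameters above correspond roughly to the `Trainer` configuration sketched below. The `tokenized_datasets` splits are hypothetical pre-tokenized `emotion` data; this is illustrative, not the exact training script.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=6
)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=2,
    seed=42,
    # Adam betas/epsilon and the linear scheduler listed above are the Trainer defaults.
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_datasets["train"],       # hypothetical pre-tokenized splits
    eval_dataset=tokenized_datasets["validation"],
    tokenizer=tokenizer,
)
# trainer.train()
```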
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8665 | 1.0 | 250 | 0.3173 | 0.912 | 0.9116 |
| 0.2486 | 2.0 | 500 | 0.2139 | 0.927 | 0.9270 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2139
- Accuracy: 0.927
- F1: 0.9270
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8665 | 1.0 | 250 | 0.3173 | 0.912 | 0.9116 |
| 0.2486 | 2.0 | 500 | 0.2139 | 0.927 | 0.9270 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.927, "name": "Accuracy"}, {"type": "f1", "value": 0.9269921912911402, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,287 |
Sif10/my_awesome_model_imdb
|
Sif10
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-03-10T12:08:16Z |
2024-03-10T13:05:54+00:00
| 11 | 0 |
---
base_model: distilbert/distilbert-base-uncased
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: my_awesome_model_imdb
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- type: accuracy
value: 0.85908
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model_imdb
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7781
- Accuracy: 0.8591
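A minimal inference sketch for this checkpoint is shown below. The review text is made up; because IMDB reviews often exceed DistilBERT's 512-token limit, the input is truncated explicitly, and the returned label will be a raw `LABEL_0`/`LABEL_1` index unless `id2label` was customised.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Sif10/my_awesome_model_imdb")

review = "This film starts slowly, but the last act is a genuinely moving payoff."
print(classifier(review, truncation=True, max_length=512))
# e.g. [{'label': 'LABEL_1', 'score': 0.99}]  (LABEL_1 = positive for the default imdb mapping)
```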
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4013 | 1.0 | 782 | 0.3535 | 0.8445 |
| 0.2107 | 2.0 | 1564 | 0.3589 | 0.8550 |
| 0.1158 | 3.0 | 2346 | 0.5241 | 0.8576 |
| 0.0423 | 4.0 | 3128 | 0.7881 | 0.8545 |
| 0.0238 | 5.0 | 3910 | 0.7781 | 0.8591 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model_imdb
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7781
- Accuracy: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4013 | 1.0 | 782 | 0.3535 | 0.8445 |
| 0.2107 | 2.0 | 1564 | 0.3589 | 0.8550 |
| 0.1158 | 3.0 | 2346 | 0.5241 | 0.8576 |
| 0.0423 | 4.0 | 3128 | 0.7881 | 0.8545 |
| 0.0238 | 5.0 | 3910 | 0.7781 | 0.8591 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
{"base_model": "distilbert/distilbert-base-uncased", "datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "my_awesome_model_imdb", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "test", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.85908, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,288 |
HPLT/sft-fpft-pt-bloom-560m
|
HPLT
| null |
[
"generation",
"question answering",
"instruction tuning",
"pt",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | 2024-04-04T16:09:09Z |
2024-04-04T16:09:11+00:00
| 0 | 0 |
---
language:
- pt
license: cc-by-nc-4.0
tags:
- generation
- question answering
- instruction tuning
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with full-parameter fine-tuning and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bloom-560m](https://huggingface.co/bigscience/bloom-560m)
* Instruction tuning language: Portuguese
* Training method: full-parameter fine-tuning.
* Best checkpoint: best cross-entropy on a validation set, trained for 3 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded using `transformers` library.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/fpft) for inference and training instructions.
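As a rough sketch of what loading with `transformers` can look like (assumptions: the repository ships the full checkpoint and tokenizer, and the plain Portuguese prompt below stands in for the Alpaca-style instruction template documented in the repository above):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "HPLT/sft-fpft-pt-bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative instruction in Portuguese; the exact prompt format used during SFT
# is described in the GitHub repository linked above.
prompt = "Explique brevemente o que é aprendizagem por reforço."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```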
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
| null |
Non_BioNLP
|
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with full-parameter fine-tuning and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bloom-560m](https://huggingface.co/bigscience/bloom-560m)
* Instruction tuning language: Portuguese
* Training method: full-parameter fine-tuning.
* Best checkpoint: best cross-entropy on a validation set, trained for 3 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded using `transformers` library.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/fpft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
{"language": ["pt"], "license": "cc-by-nc-4.0", "tags": ["generation", "question answering", "instruction tuning"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,289 |
google/t5-v1_1-base
|
google
|
text2text-generation
|
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2002.05202",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2023-01-24T16:52:30+00:00
| 80,054 | 56 |
---
datasets:
- c4
language: en
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1
## Version 1.1
[T5 Version 1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/released_checkpoints.md#t511) includes the following improvements compared to the original T5 model:
- GEGLU activation in the feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202).
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- Pre-trained on C4 only without mixing in the downstream tasks.
- No parameter sharing between the embedding and classifier layers.
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`.
**Note**: T5 Version 1.1 was only pre-trained on C4, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
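A minimal loading sketch for that fine-tuning setup (the `dropout_rate=0.1` value is an assumption based on the conventional T5 default, since this card only says dropout should be re-enabled):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Dropout was disabled during pre-training, so a non-zero rate is re-enabled here for fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-base")
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base", dropout_rate=0.1)

# The model is then fine-tuned on a downstream text-to-text task (e.g. with Seq2SeqTrainer);
# out of the box it will not produce useful task outputs.
```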
Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
Other Community Checkpoints: [here](https://huggingface.co/models?search=t5-v1_1)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

| null |
Non_BioNLP
|
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1
## Version 1.1
[T5 Version 1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/released_checkpoints.md#t511) includes the following improvements compared to the original T5 model:
- GEGLU activation in the feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202).
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- Pre-trained on C4 only without mixing in the downstream tasks.
- No parameter sharing between the embedding and classifier layers.
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`.
**Note**: T5 Version 1.1 was only pre-trained on C4, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
Other Community Checkpoints: [here](https://huggingface.co/models?search=t5-v1_1)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

|
{"datasets": ["c4"], "language": "en", "license": "apache-2.0"}
|
task
|
[
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 42,290 |
dimasik2987/a04c8b26-84ce-4163-b06a-fda53afae0bd
|
dimasik2987
| null |
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-64k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-64k",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | 2025-01-23T07:44:31Z |
2025-01-23T08:07:22+00:00
| 1 | 0 |
---
base_model: NousResearch/Yarn-Mistral-7b-64k
library_name: peft
license: apache-2.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a04c8b26-84ce-4163-b06a-fda53afae0bd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c54c4cfeb1403ba8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c54c4cfeb1403ba8_train_data.json
type:
field_instruction: hieroglyphs
field_output: translation
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: dimasik2987/a04c8b26-84ce-4163-b06a-fda53afae0bd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 3
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/c54c4cfeb1403ba8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6813de76-c54d-49f6-88c5-cfc3d6c7ec03
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6813de76-c54d-49f6-88c5-cfc3d6c7ec03
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# a04c8b26-84ce-4163-b06a-fda53afae0bd
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8521
## Model description
More information needed
## Intended uses & limitations
More information needed
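A minimal inference sketch for applying this LoRA adapter on top of the base model (assumptions: the adapter in this repository loads with `peft`, and the placeholder prompt stands in for an actual instruction, since the card does not document an inference template):
```python
import torch
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM

base_id = "NousResearch/Yarn-Mistral-7b-64k"
adapter_id = "dimasik2987/a04c8b26-84ce-4163-b06a-fda53afae0bd"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# trust_remote_code mirrors the training config above (Yarn-Mistral ships custom modelling code).
base = AutoModelForCausalLM.from_pretrained(
    base_id, trust_remote_code=True, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id).eval()

# Placeholder instruction: the training data pairs hieroglyph transliterations with translations.
prompt = "<instruction goes here>"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```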
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0013 | 1 | 3.2015 |
| 8.191 | 0.0067 | 5 | 2.4169 |
| 6.9339 | 0.0134 | 10 | 1.9922 |
| 6.4111 | 0.0201 | 15 | 1.8906 |
| 6.2029 | 0.0268 | 20 | 1.8688 |
| 6.4598 | 0.0335 | 25 | 1.8550 |
| 6.6 | 0.0402 | 30 | 1.8521 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c54c4cfeb1403ba8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c54c4cfeb1403ba8_train_data.json
type:
field_instruction: hieroglyphs
field_output: translation
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: dimasik2987/a04c8b26-84ce-4163-b06a-fda53afae0bd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 3
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/c54c4cfeb1403ba8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6813de76-c54d-49f6-88c5-cfc3d6c7ec03
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6813de76-c54d-49f6-88c5-cfc3d6c7ec03
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# a04c8b26-84ce-4163-b06a-fda53afae0bd
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8521
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0013 | 1 | 3.2015 |
| 8.191 | 0.0067 | 5 | 2.4169 |
| 6.9339 | 0.0134 | 10 | 1.9922 |
| 6.4111 | 0.0201 | 15 | 1.8906 |
| 6.2029 | 0.0268 | 20 | 1.8688 |
| 6.4598 | 0.0335 | 25 | 1.8550 |
| 6.6 | 0.0402 | 30 | 1.8521 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
{"base_model": "NousResearch/Yarn-Mistral-7b-64k", "library_name": "peft", "license": "apache-2.0", "tags": ["axolotl", "generated_from_trainer"], "model-index": [{"name": "a04c8b26-84ce-4163-b06a-fda53afae0bd", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 42,291 |
zbigi/bart-large-summarization-medical-46
|
zbigi
| null |
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:facebook/bart-large",
"base_model:adapter:facebook/bart-large",
"license:apache-2.0",
"region:us"
] | 2024-07-23T15:42:37Z |
2024-07-23T19:42:33+00:00
| 4 | 0 |
---
base_model: facebook/bart-large
library_name: peft
license: apache-2.0
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: bart-large-summarization-medical-46
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-summarization-medical-46
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8378
- Rouge1: 0.4404
- Rouge2: 0.2412
- Rougel: 0.3768
- Rougelsum: 0.3769
- Gen Len: 18.977
## Model description
More information needed
## Intended uses & limitations
More information needed
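A brief usage sketch (not from the original card) that loads the `facebook/bart-large` base together with this adapter via `peft`; the input text and generation settings are placeholders:
```python
from peft import AutoPeftModelForSeq2SeqLM
from transformers import AutoTokenizer

adapter_id = "zbigi/bart-large-summarization-medical-46"

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
model = AutoPeftModelForSeq2SeqLM.from_pretrained(adapter_id)  # resolves and loads the base model plus this adapter

text = "Placeholder clinical note: the patient presents with a three-day history of fever and productive cough ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, num_beams=4, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```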
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 46
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.2273 | 1.0 | 1250 | 1.9018 | 0.4342 | 0.2347 | 0.3676 | 0.3677 | 19.319 |
| 2.1445 | 2.0 | 2500 | 1.8668 | 0.4394 | 0.2388 | 0.3744 | 0.3743 | 18.977 |
| 2.0968 | 3.0 | 3750 | 1.8556 | 0.4406 | 0.2411 | 0.3767 | 0.3769 | 18.689 |
| 2.0883 | 4.0 | 5000 | 1.8502 | 0.4398 | 0.2391 | 0.3758 | 0.376 | 18.757 |
| 2.0638 | 5.0 | 6250 | 1.8393 | 0.4416 | 0.2406 | 0.3779 | 0.3777 | 18.88 |
| 2.0453 | 6.0 | 7500 | 1.8378 | 0.4404 | 0.2412 | 0.3768 | 0.3769 | 18.977 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
| null |
BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-summarization-medical-46
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8378
- Rouge1: 0.4404
- Rouge2: 0.2412
- Rougel: 0.3768
- Rougelsum: 0.3769
- Gen Len: 18.977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 46
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.2273 | 1.0 | 1250 | 1.9018 | 0.4342 | 0.2347 | 0.3676 | 0.3677 | 19.319 |
| 2.1445 | 2.0 | 2500 | 1.8668 | 0.4394 | 0.2388 | 0.3744 | 0.3743 | 18.977 |
| 2.0968 | 3.0 | 3750 | 1.8556 | 0.4406 | 0.2411 | 0.3767 | 0.3769 | 18.689 |
| 2.0883 | 4.0 | 5000 | 1.8502 | 0.4398 | 0.2391 | 0.3758 | 0.376 | 18.757 |
| 2.0638 | 5.0 | 6250 | 1.8393 | 0.4416 | 0.2406 | 0.3779 | 0.3777 | 18.88 |
| 2.0453 | 6.0 | 7500 | 1.8378 | 0.4404 | 0.2412 | 0.3768 | 0.3769 | 18.977 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
{"base_model": "facebook/bart-large", "library_name": "peft", "license": "apache-2.0", "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bart-large-summarization-medical-46", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 42,292 |
projecte-aina/Plume256k
|
projecte-aina
|
translation
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"translation",
"en",
"gl",
"de",
"es",
"ca",
"it",
"fr",
"eu",
"pt",
"arxiv:2406.09140",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | 2024-06-05T18:43:02Z |
2024-10-04T13:32:24+00:00
| 113 | 6 |
---
language:
- en
- gl
- de
- es
- ca
- it
- fr
- eu
- pt
license: apache-2.0
metrics:
- comet
- bleu
pipeline_tag: translation
inference: false
---
# Plume256k
This is the model card of Plume (**P**arallel **L**ang**u**age **M**od**e**l) with a vocabulary size of 256k.
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [Run the model](#run-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation](#citation)
- [Additional information](#additional-information)
</details>
## Summary
Plume is the first LLM trained for Neural Machine Translation with only parallel Catalan-Centric data from scratch. It is a language model with the same architecture as Gemma 2B. The model is trained for general translation tasks at sentence level. For more information about training, architecture and interpretability of the model, check out the paper "Investigating the translation capabilities of Large Language Models trained on parallel data only". The preprint is available on [arXiv](https://arxiv.org/abs/2406.09140).
- **Developed by:** The Language Technologies Unit from Barcelona Supercomputing Center (BSC).
- **Languages:** Spanish, French, Italian, Portuguese, Galician, German, English, and Basque.
- **License:** Apache License, Version 2.0
## Model Description
In recent years, Large Language Models (LLMs) have demonstrated exceptional proficiency across a broad spectrum of Natural Language Processing (NLP) tasks, including Machine Translation. However, previous methodologies predominantly relied on iterative processes such as instruction fine-tuning or continual pre-training, leaving unexplored the challenges of training LLMs solely on parallel data. In this work, we introduce Plume (**P**arallel **L**ang**u**age **M**od**e**l), a collection of three 2B LLMs featuring varying vocabulary sizes (32k, 128k, and 256k) trained exclusively on Catalan-centric parallel examples. These models perform comparably to previous encoder-decoder architectures on 16 supervised translation directions and 56 zero-shot ones.
For more details regarding the model architecture, the dataset and model interpretability take a look at the [paper](https://arxiv.org/abs/2406.09140).
## Intended Uses and Limitations
The model is proficient in 16 supervised translation directions that include Catalan and is capable of translating in other 56 zero-shot directions as well.
At the time of submission, no measures have been taken to estimate the bias and added toxicity embedded in the model. However, we are aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Run the model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# language codes: spa_Latn (Spanish), cat_Latn (Catalan), eng_Latn (English), ita_Latn (Italian),
# eus_Latn (Basque), deu_Latn (German), por_Latn (Portuguese), glg_Latn (Galician), fra_Latn (French)
model_id = "projecte-aina/Plume256k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
src_lang_code = 'spa_Latn'
tgt_lang_code = 'cat_Latn'
sentence = 'Ayer se fue, tomó sus cosas y se puso a navegar.'
prompt = '<s> [{}] {} \n[{}]'.format(src_lang_code, sentence, tgt_lang_code)
input_ids = tokenizer(prompt, return_tensors='pt').input_ids
output_ids = model.generate( input_ids, max_length=200, num_beams=5 )
input_length = input_ids.shape[1]
generated_text = tokenizer.decode(output_ids[0, input_length: ], skip_special_tokens=True).strip()
# Ahir se'n va anar, va agafar les seves coses i es va posar a navegar.
```
## Training
For training, the learning rate is warmed up from 1e-7 to a maximum of 3e-4 over the first 2000 steps. We apply a weight decay of 0.1 and a gradient clipping of 1.0. During training, we set an effective batch size of 81,920 tokens per gradient step distributed over 40 NVIDIA H100-64GB GPUs. We use DeepSpeed with full *float32* training. We show in the next table the training hyperparameters:
| **Hyper-Parameter** | **Value** |
|---------------------|--------------------------|
| Batch size | 40 |
| Number of Epochs | 1 |
| Optimizer | Adam |
| Adam-β₁ | 0.9 |
| Adam-β₂ | 0.999 |
| Adam-ε | 1e-08 |
| Learning rate | 3e-04 |
| LR Scheduler | Linear |
| Warmup Steps | 2000 |
More training details are specified in the [paper](https://arxiv.org/abs/2406.09140). Code for training the model and running other experiments can be found in our [GitHub repository](https://github.com/projecte-aina/Plume).
## Evaluation
Below are the evaluation results on Flores-200 and NTREX for supervised MT directions. For more details about model evaluation check out the [paper](https://arxiv.org/abs/2406.09140).
| Model | FLORES BLEU | FLORES COMET | NTREX BLEU | NTREX COMET |
|----------------------|-------------|--------------|------------|-------------|
| NLLB-1.3B | 31.02 | 0.86 | 29.68 | 0.85 |
| NLLB-600M | 29.24 | 0.85 | 28.37 | 0.84 |
| Bilinguals BSC | 31.93 | 0.86 | 29.77 | 0.84 |
| **Plume 32k** | 30.44 | 0.86 | 28.46 | 0.84 |
| **Plume 128k** | 30.81 | 0.86 | 28.78 | 0.84 |
| **Plume 256k** | 30.72 | 0.86 | 28.87 | 0.84 |
## Citation
```bibtex
@misc{gilabert2024investigating,
title={Investigating the translation capabilities of Large Language Models trained on parallel data only},
author={Javier García Gilabert and Carlos Escolano and Aleix Sant Savall and Francesca De Luca Fornaciari and Audrey Mash and Xixian Liao and Maite Melero},
year={2024},
eprint={2406.09140},
archivePrefix={arXiv}
}
```
## Additional information
### Author
The Language Technologies Unit from Barcelona Supercomputing Center.
### Contact
Feel free to write to us with any questions you may have at {javier.garcia1, carlos.escolano, aleix.santsavall, francesca.delucafornaciari, audrey.mash, xixian.liao, maite.melero}@bsc.es
### Copyright
Copyright(c) 2023 by Language Technologies Unit, Barcelona Supercomputing Center.
### License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work has been promoted and financed by the Government of Catalonia through the [Aina](https://projecteaina.cat/) project, by the Ministerio para la Transformación Digital y de la Función Pública and Plan de Recuperación, Transformación y Resiliencia - Funded by EU – NextGenerationEU within the framework of the project [ILENIA](https://proyectoilenia.es/) with reference 2022/TL22/00215337, as well as by [DeepR3](https://ixa2.si.ehu.eus/deepr3/) (TED2021-130295B-C32) founded by MCIN/AEI/10.13039/501100011033 and European Union NextGeneration EU/PRTR.
### Disclaimer
<details>
<summary>Click to expand</summary>
The model published in this repository is intended for a generalist purpose and is available to third parties under a permissive Apache License, Version 2.0.
Be aware that the model may have biases and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using this model (or any system based on it)
or become users of the model, they should note that it is their responsibility to mitigate the risks arising from its use and,
in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the model (Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties.
</details>
| null |
Non_BioNLP
|
# Plume256k
This is the model card of Plume (**P**arallel **L**ang**u**age **M**od**e**l) with a vocabulary size of 256k.
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [Run the model](#run-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation](#citation)
- [Additional information](#additional-information)
</details>
## Summary
Plume is the first LLM trained for Neural Machine Translation with only parallel Catalan-Centric data from scratch. It is a language model with the same architecture as Gemma 2B. The model is trained for general translation tasks at sentence level. For more information about training, architecture and interpretability of the model, check out the paper "Investigating the translation capabilities of Large Language Models trained on parallel data only". The preprint is available on [arXiv](https://arxiv.org/abs/2406.09140).
- **Developed by:** The Language Technologies Unit from Barcelona Supercomputing Center (BSC).
- **Languages:** Spanish, French, Italian, Portuguese, Galician, German, English, and Basque.
- **License:** Apache License, Version 2.0
## Model Description
In recent years, Large Language Models (LLMs) have demonstrated exceptional proficiency across a broad spectrum of Natural Language Processing (NLP) tasks, including Machine Translation. However, previous methodologies predominantly relied on iterative processes such as instruction fine-tuning or continual pre-training, leaving unexplored the challenges of training LLMs solely on parallel data. In this work, we introduce Plume (**P**arallel **L**ang**u**age **M**od**e**l), a collection of three 2B LLMs featuring varying vocabulary sizes (32k, 128k, and 256k) trained exclusively on Catalan-centric parallel examples. These models perform comparably to previous encoder-decoder architectures on 16 supervised translation directions and 56 zero-shot ones.
For more details regarding the model architecture, the dataset and model interpretability take a look at the [paper](https://arxiv.org/abs/2406.09140).
## Intended Uses and Limitations
The model is proficient in 16 supervised translation directions that include Catalan and is capable of translating in other 56 zero-shot directions as well.
At the time of submission, no measures have been taken to estimate the bias and added toxicity embedded in the model. However, we are aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Run the model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# language codes: spa_Latn (Spanish), cat_Latn (Catalan), eng_Latn (English), ita_Latn (Italian),
# eus_Latn (Basque), deu_Latn (German), por_Latn (Portuguese), glg_Latn (Galician), fra_Latn (French)
model_id = "projecte-aina/Plume256k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
src_lang_code = 'spa_Latn'
tgt_lang_code = 'cat_Latn'
sentence = 'Ayer se fue, tomó sus cosas y se puso a navegar.'
prompt = '<s> [{}] {} \n[{}]'.format(src_lang_code, sentence, tgt_lang_code)
input_ids = tokenizer(prompt, return_tensors='pt').input_ids
output_ids = model.generate( input_ids, max_length=200, num_beams=5 )
input_length = input_ids.shape[1]
generated_text = tokenizer.decode(output_ids[0, input_length: ], skip_special_tokens=True).strip()
# Ahir se'n va anar, va agafar les seves coses i es va posar a navegar.
```
## Training
For training, the learning rate is warmed up from 1e-7 to a maximum of 3e-4 over the first 2000 steps. We apply a weight decay of 0.1 and a gradient clipping of 1.0. During training, we set an effective batch size of 81,920 tokens per gradient step distributed over 40 NVIDIA H100-64GB GPUs. We use DeepSpeed with full *float32* training. We show in the next table the training hyperparameters:
| **Hyper-Parameter** | **Value** |
|---------------------|--------------------------|
| Batch size | 40 |
| Number of Epochs | 1 |
| Optimizer | Adam |
| Adam-β₁ | 0.9 |
| Adam-β₂ | 0.999 |
| Adam-ε | 1e-08 |
| Learning rate | 3e-04 |
| LR Scheduler | Linear |
| Warmup Steps | 2000 |
More training details are specified in the [paper](https://arxiv.org/abs/2406.09140). Code for training the model and running other experiments can be found in our [GitHub repository](https://github.com/projecte-aina/Plume).
## Evaluation
Below are the evaluation results on Flores-200 and NTREX for supervised MT directions. For more details about model evaluation check out the [paper](https://arxiv.org/abs/2406.09140).
| Model | FLORES BLEU | FLORES COMET | NTREX BLEU | NTREX COMET |
|----------------------|-------------|--------------|------------|-------------|
| NLLB-1.3B | 31.02 | 0.86 | 29.68 | 0.85 |
| NLLB-600M | 29.24 | 0.85 | 28.37 | 0.84 |
| Bilinguals BSC | 31.93 | 0.86 | 29.77 | 0.84 |
| **Plume 32k** | 30.44 | 0.86 | 28.46 | 0.84 |
| **Plume 128k** | 30.81 | 0.86 | 28.78 | 0.84 |
| **Plume 256k** | 30.72 | 0.86 | 28.87 | 0.84 |
## Citation
```bibtex
@misc{gilabert2024investigating,
title={Investigating the translation capabilities of Large Language Models trained on parallel data only},
author={Javier García Gilabert and Carlos Escolano and Aleix Sant Savall and Francesca De Luca Fornaciari and Audrey Mash and Xixian Liao and Maite Melero},
year={2024},
eprint={2406.09140},
archivePrefix={arXiv}
}
```
## Additional information
### Author
The Language Technologies Unit from Barcelona Supercomputing Center.
### Contact
Feel free to write to us with any questions you may have at {javier.garcia1, carlos.escolano, aleix.santsavall, francesca.delucafornaciari, audrey.mash, xixian.liao, maite.melero}@bsc.es
### Copyright
Copyright(c) 2023 by Language Technologies Unit, Barcelona Supercomputing Center.
### License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work has been promoted and financed by the Government of Catalonia through the [Aina](https://projecteaina.cat/) project, by the Ministerio para la Transformación Digital y de la Función Pública and Plan de Recuperación, Transformación y Resiliencia - Funded by EU – NextGenerationEU within the framework of the project [ILENIA](https://proyectoilenia.es/) with reference 2022/TL22/00215337, as well as by [DeepR3](https://ixa2.si.ehu.eus/deepr3/) (TED2021-130295B-C32) founded by MCIN/AEI/10.13039/501100011033 and European Union NextGeneration EU/PRTR.
### Disclaimer
<details>
<summary>Click to expand</summary>
The model published in this repository is intended for a generalist purpose and is available to third parties under a permissive Apache License, Version 2.0.
Be aware that the model may have biases and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using this model (or any system based on it)
or become users of the model, they should note that it is their responsibility to mitigate the risks arising from its use and,
in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the model (Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties.
</details>
|
{"language": ["en", "gl", "de", "es", "ca", "it", "fr", "eu", "pt"], "license": "apache-2.0", "metrics": ["comet", "bleu"], "pipeline_tag": "translation", "inference": false}
|
task
|
[
"TRANSLATION"
] | 42,294 |
mridul3301/distilbart-cnn-12-6-finetuned-30k-3epoch
|
mridul3301
|
text2text-generation
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:arxiv_summarization_dataset",
"base_model:sshleifer/distilbart-cnn-12-6",
"base_model:finetune:sshleifer/distilbart-cnn-12-6",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-08-21T15:04:39Z |
2023-09-14T06:56:12+00:00
| 32 | 0 |
---
base_model: sshleifer/distilbart-cnn-12-6
datasets:
- arxiv_summarization_dataset
license: apache-2.0
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: distilbart-cnn-12-6-finetuned-30k-3epoch
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: arxiv_summarization_dataset
type: arxiv_summarization_dataset
config: section
split: test[:2000]
args: section
metrics:
- type: rouge
value: 43.696
name: Rouge1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-finetuned-30k-3epoch
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on the arxiv_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3411
- Rouge1: 43.696
- Rouge2: 15.6681
- Rougel: 25.6889
- Rougelsum: 38.574
- Gen Len: 121.98
## Model description
More information needed
## Intended uses & limitations
More information needed
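A short inference sketch using the `summarization` pipeline (the input is a placeholder standing in for a full arXiv section, and the generation lengths are illustrative, chosen loosely around the ~122-token generations reported below):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="mridul3301/distilbart-cnn-12-6-finetuned-30k-3epoch",
)

article = "Placeholder arXiv-style text: we study the convergence properties of stochastic gradient descent ..."
summary = summarizer(article, max_length=180, min_length=30, do_sample=False)
print(summary[0]["summary_text"])
```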
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.7304 | 1.0 | 3750 | 2.4322 | 43.0913 | 15.1302 | 25.2555 | 38.0346 | 122.3755 |
| 2.3518 | 2.0 | 7500 | 2.3613 | 43.8799 | 15.6977 | 25.6984 | 38.7646 | 122.6945 |
| 2.2318 | 3.0 | 11250 | 2.3411 | 43.696 | 15.6681 | 25.6889 | 38.574 | 121.98 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-finetuned-30k-3epoch
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on the arxiv_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3411
- Rouge1: 43.696
- Rouge2: 15.6681
- Rougel: 25.6889
- Rougelsum: 38.574
- Gen Len: 121.98
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.7304 | 1.0 | 3750 | 2.4322 | 43.0913 | 15.1302 | 25.2555 | 38.0346 | 122.3755 |
| 2.3518 | 2.0 | 7500 | 2.3613 | 43.8799 | 15.6977 | 25.6984 | 38.7646 | 122.6945 |
| 2.2318 | 3.0 | 11250 | 2.3411 | 43.696 | 15.6681 | 25.6889 | 38.574 | 121.98 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
{"base_model": "sshleifer/distilbart-cnn-12-6", "datasets": ["arxiv_summarization_dataset"], "license": "apache-2.0", "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbart-cnn-12-6-finetuned-30k-3epoch", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "arxiv_summarization_dataset", "type": "arxiv_summarization_dataset", "config": "section", "split": "test[:2000]", "args": "section"}, "metrics": [{"type": "rouge", "value": 43.696, "name": "Rouge1"}]}]}]}
|
task
|
[
"SUMMARIZATION"
] | 42,295 |
QuantFactory/Mistral-7B-Instruct-Uz-GGUF
|
QuantFactory
|
text-generation
|
[
"transformers",
"gguf",
"text-generation-inference",
"summarization",
"translation",
"question-answering",
"text-generation",
"uz",
"en",
"dataset:tahrirchi/uz-crawl",
"dataset:allenai/c4",
"dataset:MLDataScientist/Wikipedia-uzbek-2024-05-01",
"dataset:yahma/alpaca-cleaned",
"dataset:behbudiy/alpaca-cleaned-uz",
"dataset:behbudiy/translation-instruction",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:quantized:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-09-20T05:31:15Z |
2024-09-20T06:08:21+00:00
| 258 | 2 |
---
base_model: mistralai/Mistral-7B-Instruct-v0.3
datasets:
- tahrirchi/uz-crawl
- allenai/c4
- MLDataScientist/Wikipedia-uzbek-2024-05-01
- yahma/alpaca-cleaned
- behbudiy/alpaca-cleaned-uz
- behbudiy/translation-instruction
language:
- uz
- en
library_name: transformers
license: apache-2.0
metrics:
- bleu
- comet
- accuracy
pipeline_tag: text-generation
tags:
- text-generation-inference
- summarization
- translation
- question-answering
---
[](https://hf.co/QuantFactory)
# QuantFactory/Mistral-7B-Instruct-Uz-GGUF
This is quantized version of [behbudiy/Mistral-7B-Instruct-Uz](https://huggingface.co/behbudiy/Mistral-7B-Instruct-Uz) created using llama.cpp
# Original Model Card
### Model Description
The Mistral-7B-Instruct-Uz model has been continually pre-trained and instruction-tuned using a mix of publicly available and synthetically constructed Uzbek and English data to preserve its original knowledge while enhancing its capabilities. This model is designed to support various natural language processing tasks in Uzbek, such as machine translation, summarization, and dialogue systems, ensuring robust performance across these applications.
For details regarding the performance metrics compared to the base model, see [this post.](https://www.linkedin.com/feed/update/urn:li:activity:7241389815559008256/)
- **Developed by:**
- [Eldor Fozilov](https://www.linkedin.com/in/eldor-fozilov/)
- [Azimjon Urinov](https://azimjonn.github.io/)
- [Khurshid Juraev](https://kjuraev.com/)
## Installation
It is recommended to use `behbudiy/Mistral-7B-Instruct-Uz` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling.
```
pip install mistral_inference
```
## Download
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', '7B-Instruct-Uz')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="behbudiy/Mistral-7B-Instruct-Uz", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```
### Chat
After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using
```
mistral-chat $HOME/mistral_models/7B-Instruct-Uz --instruct --max_tokens 256
```
### Instruction Following
```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(messages=[UserMessage(content="O'zbekiston haqida ma'lumot ber.")])
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
## Generate with `transformers`
If you want to use Hugging Face `transformers` to generate text, you can do something like this.
```py
from transformers import pipeline
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="behbudiy/Mistral-7B-Instruct-Uz", device='cuda')
chatbot(messages)
```
## Information on Evaluation Method
To evaluate on the translation task, we used FLORES+ Uz-En / En-Uz datasets, where we merged the dev and test sets to create a bigger evaluation set for each of the Uz-En and En-Uz subsets.
We used the following prompt to do one-shot Uz-En evaluation both for the base model and Uzbek-optimized model (for En-Uz eval, we changed the positions of the words "English" and "Uzbek").
```python
prompt = f'''You are a professional Uzbek-English translator. Your task is to accurately translate the given Uzbek text into English.
Instructions:
1. Translate the text from Uzbek to English.
2. Maintain the original meaning and tone.
3. Use appropriate English grammar and vocabulary.
4. If you encounter an ambiguous or unfamiliar word, provide the most likely translation based on context.
5. Output only the English translation, without any additional comments.
Example:
Uzbek: "Bugun ob-havo juda yaxshi, quyosh charaqlab turibdi."
English: "The weather is very nice today, the sun is shining brightly."
Now, please translate the following Uzbek text into English:
"{sentence}"
'''
```
To assess the model's ability in Uzbek sentiment analysis, we used the **risqaliyevds/uzbek-sentiment-analysis** dataset, for which we created binary labels (0: Negative, 1: Positive) using GPT-4o API (refer to **behbudiy/uzbek-sentiment-analysis** dataset).
We used the following prompt for the evaluation:
```python
prompt = f'''Given the following text, determine the sentiment as either 'Positive' or 'Negative.' Respond with only the word 'Positive' or 'Negative' without any additional text or explanation.
Text: {text}"
'''
```
For Uzbek News Classification, we used **risqaliyevds/uzbek-zero-shot-classification** dataset and asked the model to predict the category of the news using the following prompt:
```python
prompt = f'''Classify the given Uzbek news article into one of the following categories. Provide only the category number as the answer.
Categories:
0 - Politics (Siyosat)
1 - Economy (Iqtisodiyot)
2 - Technology (Texnologiya)
3 - Sports (Sport)
4 - Culture (Madaniyat)
5 - Health (Salomatlik)
6 - Family and Society (Oila va Jamiyat)
7 - Education (Ta'lim)
8 - Ecology (Ekologiya)
9 - Foreign News (Xorijiy Yangiliklar)
Now classify this article:
"{text}"
Answer (number only):"
'''
```
## MMLU
We used [this script](https://github.com/FranxYao/chain-of-thought-hub/blob/461e2d551f3f12d54caee75fa1e915fdbc3e9d12/MMLU/run_mmlu_open_source.py).
## More
For more details and examples, refer to the base model below:
https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3
| null |
Non_BioNLP
|
[](https://hf.co/QuantFactory)
# QuantFactory/Mistral-7B-Instruct-Uz-GGUF
This is quantized version of [behbudiy/Mistral-7B-Instruct-Uz](https://huggingface.co/behbudiy/Mistral-7B-Instruct-Uz) created using llama.cpp
# Original Model Card
### Model Description
The Mistral-7B-Instruct-Uz model has been continually pre-trained and instruction-tuned using a mix of publicly available and synthetically constructed Uzbek and English data to preserve its original knowledge while enhancing its capabilities. This model is designed to support various natural language processing tasks in Uzbek, such as machine translation, summarization, and dialogue systems, ensuring robust performance across these applications.
For details regarding the performance metrics compared to the base model, see [this post.](https://www.linkedin.com/feed/update/urn:li:activity:7241389815559008256/)
- **Developed by:**
- [Eldor Fozilov](https://www.linkedin.com/in/eldor-fozilov/)
- [Azimjon Urinov](https://azimjonn.github.io/)
- [Khurshid Juraev](https://kjuraev.com/)
## Installation
It is recommended to use `behbudiy/Mistral-7B-Instruct-Uz` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling.
```
pip install mistral_inference
```
## Download
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', '7B-Instruct-Uz')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="behbudiy/Mistral-7B-Instruct-Uz", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```
### Chat
After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using
```
mistral-chat $HOME/mistral_models/7B-Instruct-Uz --instruct --max_tokens 256
```
### Instruction Following
```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(messages=[UserMessage(content="O'zbekiston haqida ma'lumot ber.")])
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
## Generate with `transformers`
If you want to use Hugging Face `transformers` to generate text, you can do something like this.
```py
from transformers import pipeline
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="behbudiy/Mistral-7B-Instruct-Uz", device='cuda')
chatbot(messages)
```
## Information on Evaluation Method
To evaluate on the translation task, we used FLORES+ Uz-En / En-Uz datasets, where we merged the dev and test sets to create a bigger evaluation set for each of the Uz-En and En-Uz subsets.
We used the following prompt to do one-shot Uz-En evaluation both for the base model and Uzbek-optimized model (for En-Uz eval, we changed the positions of the words "English" and "Uzbek").
```python
prompt = f'''You are a professional Uzbek-English translator. Your task is to accurately translate the given Uzbek text into English.
Instructions:
1. Translate the text from Uzbek to English.
2. Maintain the original meaning and tone.
3. Use appropriate English grammar and vocabulary.
4. If you encounter an ambiguous or unfamiliar word, provide the most likely translation based on context.
5. Output only the English translation, without any additional comments.
Example:
Uzbek: "Bugun ob-havo juda yaxshi, quyosh charaqlab turibdi."
English: "The weather is very nice today, the sun is shining brightly."
Now, please translate the following Uzbek text into English:
"{sentence}"
'''
```
To assess the model's ability in Uzbek sentiment analysis, we used the **risqaliyevds/uzbek-sentiment-analysis** dataset, for which we created binary labels (0: Negative, 1: Positive) using GPT-4o API (refer to **behbudiy/uzbek-sentiment-analysis** dataset).
We used the following prompt for the evaluation:
```python
prompt = f'''Given the following text, determine the sentiment as either 'Positive' or 'Negative.' Respond with only the word 'Positive' or 'Negative' without any additional text or explanation.
Text: {text}"
'''
```
For Uzbek News Classification, we used **risqaliyevds/uzbek-zero-shot-classification** dataset and asked the model to predict the category of the news using the following prompt:
```python
prompt = f'''Classify the given Uzbek news article into one of the following categories. Provide only the category number as the answer.
Categories:
0 - Politics (Siyosat)
1 - Economy (Iqtisodiyot)
2 - Technology (Texnologiya)
3 - Sports (Sport)
4 - Culture (Madaniyat)
5 - Health (Salomatlik)
6 - Family and Society (Oila va Jamiyat)
7 - Education (Ta'lim)
8 - Ecology (Ekologiya)
9 - Foreign News (Xorijiy Yangiliklar)
Now classify this article:
"{text}"
Answer (number only):"
'''
```
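Since the model is instructed to answer with a category number only, the prediction can be recovered by extracting the first digit from the reply, as in this minimal sketch (`response` and `gold_label` are placeholders):
```python
import re

response = "3"   # raw model reply for one article (placeholder)
gold_label = 3   # gold category id from the dataset (placeholder)

match = re.search(r"\d", response)
predicted = int(match.group()) if match else -1
print(predicted == gold_label)
```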
## MMLU
We used [this script](https://github.com/FranxYao/chain-of-thought-hub/blob/461e2d551f3f12d54caee75fa1e915fdbc3e9d12/MMLU/run_mmlu_open_source.py).
## More
For more details and examples, refer to the base model below:
https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3
|
{"base_model": "mistralai/Mistral-7B-Instruct-v0.3", "datasets": ["tahrirchi/uz-crawl", "allenai/c4", "MLDataScientist/Wikipedia-uzbek-2024-05-01", "yahma/alpaca-cleaned", "behbudiy/alpaca-cleaned-uz", "behbudiy/translation-instruction"], "language": ["uz", "en"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["bleu", "comet", "accuracy"], "pipeline_tag": "text-generation", "tags": ["text-generation-inference", "summarization", "translation", "question-answering"]}
|
task
|
[
"TRANSLATION",
"SUMMARIZATION"
] | 42,296 |
seongil-dn/gte-gold-bs64
|
seongil-dn
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:5376",
"loss:MultipleNegativesRankingLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:Alibaba-NLP/gte-multilingual-mlm-base",
"base_model:finetune:Alibaba-NLP/gte-multilingual-mlm-base",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-11-14T12:10:21Z |
2024-11-14T12:11:21+00:00
| 6 | 0 |
---
base_model: Alibaba-NLP/gte-multilingual-mlm-base
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5376
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 세종특별자치시의 행정구역은 어떻게 되나요?
sentences:
- 잉글랜드 주민들이 앵글로색슨족인 반면 스코틀랜드 주민들은 아일랜드나 프랑스 북부 브르타뉴, 스페인 북부 갈리시아와 같은 켈트족이다. 따라서
잉글랜드와는 문화가 다르며, 국민성도 다르다. 언어는 원래 영어와 다른 스코틀랜드 게일어, 그리고 스코트어를 사용하였으나 영어의 영향 및 스코틀랜드문화에
대한 잉글랜드의 탄압으로 사용인구가 줄어들었다. 스코트어는 영어와 계통이 같은 게르만어파로서 중세영어의 모습을 많이 보존하고 있다. 스코틀랜드
사람들의 영어도 스코트어가 혼합되어 있는데, 발음도 잉글랜드 발음과 많이 다르다. 예를 들어 잉글랜드 사람들은 모음 다음에 나오는 알(R)을
발음하지 않는 반면 스코틀랜드 사람들은 미국이나 캐나다 사람들처럼 알(R)을 분명히 발음한다. 스코틀랜드 게일어는 게르만어파와 전혀 다른 켈트어파에
속하는 켈트족의 순수한 언어로서 현재 사용인구가 1000여명밖에 남아있지 않아 사멸위기에 놓여있다. 2005년에 공용어로 지정되었다.2005년
총선거 이전까지 스코틀랜드에는 영국 의회의 하원 의석 가운데 72석이 할당되어 있었다. 이것은 연합왕국을 구성하는 다른 지역에 비해 의석이
과다 할당된 것이었다. 1998년 스코틀랜드법의 81조를 통해 스코틀랜드의 선거구를 잉글랜드와 균등하게 조정하였다. 그 결과 스코틀랜드에 할당된
의석을 57석으로 줄이는 스코틀랜드 선거구 조정 위원회의 권고가 2005년 총선거에서부터 적용되었다. 이러한 의석수 삭감을 위해 영국 의회에서는
1998년 스코틀랜드법에 대한 수정안이 2004년 스코틀랜드 선거구법으로 통과되었다. 과도할당된 의석을 바탕으로 스코틀랜드는 영국 하원 내에서의
큰 발언권을 갖고 있었으나, 스코틀랜드 의회가 부활되고 난 이후에는 그러한 발언권이 굳이 필요하지 않게 되었다는 점도 관련된다.
- 새만금지역은 행정구역상 2시1군 19개 읍·면·동으로 군산시, 김제시, 부안군에 걸쳐 총 401km에 이르며 뉴욕 맨해튼의 5배, 서울특별시
면적의 3분의2, 행정복합도시로 새롭게 탄생되는 세종특별자치시 면적의 5.7배에 달한다.
- 세종특별자치시의 행정 구역은 1읍, 9면, 9행정동(14개 법정동)으로 구성되어 있으며, 2018년 6월 말 주민등록 기준으로 인구는 30만0332
명, 11만7734 가구이다.
- source_sentence: 니체는 어느 나라 사람인가?
sentences:
- 성리학(性理學) 12세기에 남송의 주희(朱熹)가 집대성한 유교의 주류 학파이다. 성리학의 어원은 주희가 주창한 성즉리(性卽理)를 축약한 명칭이다.
- 니체는 1844년 10월 15일 예전의 프로이센 (독일)의 작센 지방의 작은 마을인 뢰켄(Röcken)에서 루터교 목사의 아들로 태어났다.
그의 이름은 프러시아의 왕인 프리드리히 빌헬름 4세에게서 빌려온 것으로, 빌헬름 4세는 니체가 태어나던 날에 나이가 49세를 넘어있었다(니체는
훗날 그의 이름에서 가운데에 있던 "빌헬름"을 빼 버렸다.) 니체의 아버지인 카를 빌헬름 루트비히 니체(1813-1849)는 루터교회 목사이자
전직 교사이었고, 프란치스카 욀러(1826~1897)와 1843년에 결혼하였다. 그의 여동생인 엘리자베스 니체는 1846년에 태어났고, 뒤를
이어 남동생인 루드비히 요셉이 1848년에 태어났다. 니체의 아버지는 뇌 질환으로 1849년에 세상을 떠났다. 그의 어린 남동생은 1850년에
죽었다. 그 후 가족은 나움부르크로 이사를 갔고, 그곳에서 니체의 할머니와 어머니 프란치스카, 아버지의 결혼하지 않은 두 자매, 두 하녀들과
함께 살며 어린시절을 보냈다. 니체의 할머니가 1856년에 세상을 하직하자, 가족은 그들의 집으로 이사했다.
- 19세기 철학자 프리드리히 니체는 '신은 죽었다.'라는 발언으로 잘 알려진 명사이다. 그러나 이 발언은 니체가 직접 한 발언이 아니라 그의
작품 속 인물들 사이의 대화에서 나온 문장이다. 니체는 기독교적 유신론이 한 신앙 체제로서 서구 세계의 도덕적 근원이였으나 현대 사상의 결과
즉 허무주의의 부상과 기독교적 가치의 하락으로 붕괴되고 거부되었다고 논증하였다. 니체는 굳은 무신론자로서 허무주의와 인본주의의 부작용을 걱정하기도
하였다. 따라서 그는 새로운 사회를 요구하였는데, 그것은 바로 초인 중심의 사회였다.
- source_sentence: 돼지의 번식은 태어난 지 몇개월 정도 되면 짝짓기를 할 수 있나요?
sentences:
- 워싱턴 기념탑(555피트, 약 169.3m)의 높이 이상으로 건축물을 건설하지 못하도록 하여, 뉴욕와 달리 초고층 건물이 드물다. 백악관,
펜타곤, 연방 의사당 등 미국의 주요 정부 기관이 있다. 정식 연방주가 아니므로 상원 의석은 없으나, 표결권이 없는 미국 하원 파견대표 1명
(2019년 현재 민주당의 Eleanor Holmes Norton의원)과 대통령선거인단 3표를 확보하고 있다.
- 두더지는 태어난 지 2-3년이면 새끼를 낳을 수 있다. 3-4월에 짝짓기를 하고 1년에 1회, 2-7마리의 새끼를 낳는다. 성장이 매우 빨라서
6개월이면 앞발을 제외하고 어미와 비슷한 크기로 자란다. 수명은 5년 정도이다.
- 돼지는 번식이 매우 빨라서 태어난 지 8개월 정도가 되면 짝짓기를 할 수 있다. 임신기간은 114일이고 보통 한배에 8-12마리를 낳는다.
갓 태어난 새끼돼지는 약 1kg가량이고, 1주일이 지나면 두 배로 늘어난다. 다 자란 수퇘지는 230 kg 이상 되며, 암퇘지도 200kg이
넘는다. 돼지가 완전히 자라기 위해서는 1년 반에서 2년이 걸리며 수명은 9-15년이다. 돼지는 오랜 시간과 노력을 거친 품종 개량으로 오늘날
수많은 종에 이르렀다.이슬람교와 유대교의 발원지인 서아시아가 고온건조한 사막기후여서 잡식동물인 데다가 자주 씻는 돼지는 키우기 부적합한 동물이다.
게다가 비활동적인 성격때문에 불결한 짐승으로 여겨졌다. 지금도 이슬람교도와 유대인들은 돼지고기를 먹지 않는 전통이 있다.
- source_sentence: 인간은 언제 달에 착륙했는가?
sentences:
- 세종특별자치시의 행정 구역은 1읍, 9면, 9행정동(14개 법정동)으로 구성되어 있으며, 2018년 6월 말 주민등록 기준으로 인구는 30만0332
명, 11만7734 가구이다.
- 아폴로 11호(Apollo 11)는 처음으로 달에 착륙한 유인 우주선이다. 아폴로 계획의 다섯 번째 유인우주비행인 동시에 세 번째 유인 달
탐사이기도 했다. 1969년 7월 16일에 발사되었으며 선장 닐 암스트롱, 사령선 조종사 마이클 콜린스, 달 착륙선 조종사 버즈 올드린이 탔다.
7월 20일 암스트롱과 올드린은 달에 발을 딛은 최초의 인류가 되었다. 당시 콜린스는 달 궤도를 돌고 있었다.
- 1969년 7월 16일 13시 32분 UTC에 플로리다 주 케네디 우주 센터에서 새턴 5호 로켓으로 발사되었다. NASA의 5번째 아폴로 프로그램
유인우주선 임무였다. 7월 20일 20시 17분 UTC에 달착륙선이 달의 표면에 착륙했다. 선장은 닐 암스트롱 중위, 조종사는 버즈 올드린이었다.
달에 착륙하고 6시간이 지나서 7월 21일 02시 56분 15초 UTC에 암스트롱이 달에 내렸다. 버즈 올드린은 20분 후에 내려왔다. 2시간
30분을 달에서 걸어다니면서 21.5 kg의 달 모래 등을 채집했다. 달 궤도를 돌고 있는 사령선에는 마이클 콜린스 (우주비행사)가 혼자 남아있었다.
달에 착륙하고 21.5 시간이 지나서 달착륙선이 이륙해 사령선과 도킹했다.콜린스는 원래 아폴로 8호 때 사령선 조종사를 맡기로 되어 있었으나,
등 부상으로 인해 탈락하고 짐 러벨이 대신 임무를 맡았다. 또한 아폴로 11호 달 착륙선 조종사를 맡기로 되어 있었으나, 사령선 조종사는 우주
비행 경험이 있는 사람만 할 수 있기 때문에 제미니 계획에서 우주비행을 한 경험이 있던 콜린스 비행사가 사령선 조종사로 일종의 '진급'을 하게
되었다.달 착륙선은 미국의 국장이자 아폴로 11호의 기장 마스코트인 흰머리수리에서 착안, 이글("Eagle")로 이름지어졌다. 사령선의 이름은
아폴로 11호 승무원들이 자신들의 달 탐험을 미 대륙을 발견했던 콜롬버스의 이름을 따서 콜럼비아("Columbia")로 이름이 지어졌다. 미국
항공우주국 내부 문서에는 달 착륙선과 사령선을 각각 빙수("Snowcone")와 건초가리("Haystack")로 부를 계획이 잡혀 있었으나,
이들 이름은 언론에 계획이 공표되기 전에 조용히 바뀌었다.
- source_sentence: 인간은 언제 달에 착륙했는가?
sentences:
- 아폴로 11호(Apollo 11)는 처음으로 달에 착륙한 유인 우주선이다. 아폴로 계획의 다섯 번째 유인우주비행인 동시에 세 번째 유인 달
탐사이기도 했다. 1969년 7월 16일에 발사되었으며 선장 닐 암스트롱, 사령선 조종사 마이클 콜린스, 달 착륙선 조종사 버즈 올드린이 탔다.
7월 20일 암스트롱과 올드린은 달에 발을 딛은 최초의 인류가 되었다. 당시 콜린스는 달 궤도를 돌고 있었다.
- 가톨릭 신자들은 모든 주일 (그리스도교)과 더불어 의무 대축일에 해당하는 천주의 성모 마리아 대축일(1월 1일), 주님 부활 대축일, 성모
승천 대축일(8월 15일), 주님 성탄 대축일(12월 25일)에 미사에 참례해야 하는 것이 기본적인 의무이다.
- 사일런스는 닥터를 죽이기 위한 계획의 일환으로 우주복이 필요했으며, 전 인류에 걸쳐 '암시 능력'을 이용해 인류가 달에 가기 위한 연구를 하게
만들고 그 결과 인간이 만들어낸 우주복을 훔쳐 각종 최신 과학기술력을 탑재하여 개조한다. 하지만 사일런스가 "인간은 우릴 보고있을 때만 죽일
수 있다." 라고 말한 장면을 닥터가 아폴로 우주선의 송신 장치에 붙여놓아 아폴로 우주선이 달 착륙할때 TV 화면을 보고있던 전 세계 사람들에게
'사일런스를 죽여라'라는 암시가 걸리고 그 결과 사일런스는 1969년를 기점으로 더이상 인류에게 암시를 하지 못하게 되었다.
---
# SentenceTransformer based on Alibaba-NLP/gte-multilingual-mlm-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-multilingual-mlm-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-mlm-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-multilingual-mlm-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-mlm-base) <!-- at revision b747c5e8eb09e48c24eb3d4e48f80a79a18889ff -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("seongil-dn/gte-gold-bs64")
# Run inference
sentences = [
'인간은 언제 달에 착륙했는가?',
'아폴로 11호(Apollo 11)는 처음으로 달에 착륙한 유인 우주선이다. 아폴로 계획의 다섯 번째 유인우주비행인 동시에 세 번째 유인 달 탐사이기도 했다. 1969년 7월 16일에 발사되었으며 선장 닐 암스트롱, 사령선 조종사 마이클 콜린스, 달 착륙선 조종사 버즈 올드린이 탔다. 7월 20일 암스트롱과 올드린은 달에 발을 딛은 최초의 인류가 되었다. 당시 콜린스는 달 궤도를 돌고 있었다.',
'사일런스는 닥터를 죽이기 위한 계획의 일환으로 우주복이 필요했으며, 전 인류에 걸쳐 \'암시 능력\'을 이용해 인류가 달에 가기 위한 연구를 하게 만들고 그 결과 인간이 만들어낸 우주복을 훔쳐 각종 최신 과학기술력을 탑재하여 개조한다. 하지만 사일런스가 "인간은 우릴 보고있을 때만 죽일 수 있다." 라고 말한 장면을 닥터가 아폴로 우주선의 송신 장치에 붙여놓아 아폴로 우주선이 달 착륙할때 TV 화면을 보고있던 전 세계 사람들에게 \'사일런스를 죽여라\'라는 암시가 걸리고 그 결과 사일런스는 1969년를 기점으로 더이상 인류에게 암시를 하지 못하게 되었다.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 5,376 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 14.66 tokens</li><li>max: 83 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 151.18 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 169.71 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>로마의 면적은 서울시의 2배인가요?</code> | <code>로마()는 이탈리아의 수도이자 라치오주의 주도로, 테베레 강 연안에 있다. 로마시의 행정구역 면적은 1,285km로 서울시의 2배정도이고, 2014년 인구는 290여만명이다. 로마시 권역의 인구는 430여만명이다. 로마 대도시현의 인구는 400만이 넘지만 밀라노나 나폴리 대도시현에 비해 면적이 3~4배 넓은 편이고 되려 로마시의 면적과 밀라노와 나폴리의 대도시현의 면적이 비슷하므로 세 도시 모두 300만 정도로 비슷한 규모의 도시라 볼 수 있다.</code> | <code>도봉구는 서울시청으로부터 약12km 동북부에 구의 중심인 방학동이 위치하며, 구 전체면적은 20.84km로 서울특별시 면적의 3.4%를 차지하고 있다. 도봉구 면적 중에서 가장 많이 차지하는 부분은 북한산국립공원을 비롯한 공원으로, 구면적의 48.2%인 10.05km에 달하고 있다. 서울시의 최북단에 위치한 도봉구는 동쪽으로 노원구 상계동과, 서쪽은 강북구 수유동·우이동과, 남쪽은 노원구 월계동 및 강북구 번동과 북쪽은 의정부시 호원동 등과 접하고 있는 서울 동북부의 관문 지역이다.</code> |
| <code>로마의 면적은 서울시의 2배인가요?</code> | <code>로마()는 이탈리아의 수도이자 라치오주의 주도로, 테베레 강 연안에 있다. 로마시의 행정구역 면적은 1,285km로 서울시의 2배정도이고, 2014년 인구는 290여만명이다. 로마시 권역의 인구는 430여만명이다. 로마 대도시현의 인구는 400만이 넘지만 밀라노나 나폴리 대도시현에 비해 면적이 3~4배 넓은 편이고 되려 로마시의 면적과 밀라노와 나폴리의 대도시현의 면적이 비슷하므로 세 도시 모두 300만 정도로 비슷한 규모의 도시라 볼 수 있다.</code> | <code>신안군(新安郡)은 유인도 72개와 무인도 932개로 이뤄져 있다. 섬의 면적만 (655km)에 달하고, 바다와 육지 넓이를 더한 신안군의 면적은 서울시의 22배나 된다. 이런 넓은 지역을 36곳의 치안센터와 파출소에 근무하는 목포경찰서 소속 경찰관 100여명이 관리해, 이전부터 치안 공백을 우려하는 주민들의 지적이 많았다. 신안군 한 사회단체 관계자는 "신안에 경찰서가 있었다면 염전 종사자 관리감독이 이처럼 방관 상태까지 이르지 않았을 것이다"고 주장했다.</code> |
| <code>로마의 면적은 서울시의 2배인가요?</code> | <code>로마()는 이탈리아의 수도이자 라치오주의 주도로, 테베레 강 연안에 있다. 로마시의 행정구역 면적은 1,285km로 서울시의 2배정도이고, 2014년 인구는 290여만명이다. 로마시 권역의 인구는 430여만명이다. 로마 대도시현의 인구는 400만이 넘지만 밀라노나 나폴리 대도시현에 비해 면적이 3~4배 넓은 편이고 되려 로마시의 면적과 밀라노와 나폴리의 대도시현의 면적이 비슷하므로 세 도시 모두 300만 정도로 비슷한 규모의 도시라 볼 수 있다.</code> | <code>로마는 2015년 1월 1일부로 로마 수도 광역시의 행정 중심지가 되었다. 이 로마 수도 광역시는 로마 광역권에 북쪽으로 치비타베키아까지 뻗어나갔던 구 로마현을 대체했다. 로마 수도 광역시의 면적은 총 5,353제곱미터로 이탈리아에서 가장 크며, 리구리아주에 맞먹는다. 이와 더불어 로마는 라치오주의 주도이기도 하다.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 64
- `gradient_accumulation_steps`: 8
- `learning_rate`: 0.0001
- `adam_epsilon`: 1e-07
- `num_train_epochs`: 10
- `warmup_ratio`: 0.1
- `fp16`: True
- `dataloader_drop_last`: True
- `batch_sampler`: no_duplicates
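Combining the non-default hyperparameters above with the loss described earlier, an equivalent fine-tuning run could be set up as in the following minimal sketch. This is illustrative only, not the exact script used for this model: the toy `train_dataset` contents and the `trust_remote_code=True` flag are assumptions.
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Base model (gte-multilingual uses custom modeling code, hence trust_remote_code).
model = SentenceTransformer("Alibaba-NLP/gte-multilingual-mlm-base", trust_remote_code=True)

# Toy (anchor, positive, negative) triplets standing in for the real 5,376-sample dataset.
train_dataset = Dataset.from_dict({
    "anchor": ["인간은 언제 달에 착륙했는가?"],
    "positive": ["아폴로 11호는 처음으로 달에 착륙한 유인 우주선이다."],
    "negative": ["가톨릭 신자들은 의무 대축일에 미사에 참례해야 한다."],
})

loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="gte-gold-bs64",
    per_device_train_batch_size=64,
    gradient_accumulation_steps=8,
    learning_rate=1e-4,
    num_train_epochs=10,
    warmup_ratio=0.1,
    fp16=True,
    dataloader_drop_last=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```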
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-07
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0952 | 1 | 5.6584 |
| 0.1905 | 2 | 5.6663 |
| 0.2857 | 3 | 5.2883 |
| 0.3810 | 4 | 5.5523 |
| 0.4762 | 5 | 5.5037 |
| 0.5714 | 6 | 5.1176 |
| 0.6667 | 7 | 4.9949 |
| 0.7619 | 8 | 5.0314 |
| 0.8571 | 9 | 4.4824 |
| 0.9524 | 10 | 4.1297 |
| 1.0952 | 11 | 3.6362 |
| 1.1905 | 12 | 2.9783 |
| 1.2857 | 13 | 2.6855 |
| 1.3810 | 14 | 2.1482 |
| 1.4762 | 15 | 1.9731 |
| 1.5714 | 16 | 1.6655 |
| 1.6667 | 17 | 1.5604 |
| 1.7619 | 18 | 1.3974 |
| 1.8571 | 19 | 1.2828 |
| 1.9524 | 20 | 1.3931 |
| 2.0952 | 21 | 1.0056 |
| 2.1905 | 22 | 0.8308 |
| 2.2857 | 23 | 0.7171 |
| 2.3810 | 24 | 0.6162 |
| 2.4762 | 25 | 0.6624 |
| 2.5714 | 26 | 0.5194 |
| 2.6667 | 27 | 0.5322 |
| 2.7619 | 28 | 0.457 |
| 2.8571 | 29 | 0.5596 |
| 2.9524 | 30 | 0.5194 |
| 3.0952 | 31 | 0.3777 |
| 3.1905 | 32 | 0.324 |
| 3.2857 | 33 | 0.2961 |
| 3.3810 | 34 | 0.2515 |
| 3.4762 | 35 | 0.2501 |
| 3.5714 | 36 | 0.2552 |
| 3.6667 | 37 | 0.1956 |
| 3.7619 | 38 | 0.1688 |
| 3.8571 | 39 | 0.207 |
| 3.9524 | 40 | 0.2219 |
| 4.0952 | 41 | 0.1458 |
| 4.1905 | 42 | 0.1345 |
| 4.2857 | 43 | 0.1421 |
| 4.3810 | 44 | 0.1228 |
| 4.4762 | 45 | 0.1158 |
| 4.5714 | 46 | 0.1105 |
| 4.6667 | 47 | 0.0788 |
| 4.7619 | 48 | 0.079 |
| 4.8571 | 49 | 0.111 |
| 4.9524 | 50 | 0.1202 |
| 5.0952 | 51 | 0.0685 |
| 5.1905 | 52 | 0.0834 |
| 5.2857 | 53 | 0.0711 |
| 5.3810 | 54 | 0.0694 |
| 5.4762 | 55 | 0.0627 |
| 5.5714 | 56 | 0.0655 |
| 5.6667 | 57 | 0.0576 |
| 5.7619 | 58 | 0.0467 |
| 5.8571 | 59 | 0.0582 |
| 5.9524 | 60 | 0.07 |
| 6.0952 | 61 | 0.0399 |
| 6.1905 | 62 | 0.0498 |
| 6.2857 | 63 | 0.0509 |
| 6.3810 | 64 | 0.0495 |
| 6.4762 | 65 | 0.0399 |
| 6.5714 | 66 | 0.0305 |
| 6.6667 | 67 | 0.0202 |
| 6.7619 | 68 | 0.0205 |
| 6.8571 | 69 | 0.0321 |
| 6.9524 | 70 | 0.048 |
| 7.0952 | 71 | 0.0231 |
| 7.1905 | 72 | 0.0388 |
| 7.2857 | 73 | 0.0241 |
| 7.3810 | 74 | 0.0227 |
| 7.4762 | 75 | 0.0241 |
| 7.5714 | 76 | 0.0252 |
| 7.6667 | 77 | 0.0202 |
| 7.7619 | 78 | 0.0171 |
| 7.8571 | 79 | 0.0277 |
| 7.9524 | 80 | 0.0352 |
| 8.0952 | 81 | 0.016 |
| 8.1905 | 82 | 0.0186 |
| 8.2857 | 83 | 0.0228 |
| 8.3810 | 84 | 0.0173 |
| 8.4762 | 85 | 0.0134 |
| 8.5714 | 86 | 0.0138 |
| 8.6667 | 87 | 0.0126 |
| 8.7619 | 88 | 0.0108 |
| 8.8571 | 89 | 0.0156 |
| 8.9524 | 90 | 0.0235 |
| 9.0952 | 91 | 0.0117 |
| 9.1905 | 92 | 0.0155 |
| 9.2857 | 93 | 0.0135 |
| 9.3810 | 94 | 0.0162 |
| 9.4762 | 95 | 0.0121 |
| 9.5714 | 96 | 0.0125 |
| 9.6667 | 97 | 0.0113 |
| 9.7619 | 98 | 0.0085 |
| 9.8571 | 99 | 0.0164 |
| 9.9524 | 100 | 0.0206 |
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.4.0+cu121
- Accelerate: 1.1.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on Alibaba-NLP/gte-multilingual-mlm-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-multilingual-mlm-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-mlm-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-multilingual-mlm-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-mlm-base) <!-- at revision b747c5e8eb09e48c24eb3d4e48f80a79a18889ff -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("seongil-dn/gte-gold-bs64")
# Run inference
sentences = [
'인간은 언제 달에 착륙했는가?',
'아폴로 11호(Apollo 11)는 처음으로 달에 착륙한 유인 우주선이다. 아폴로 계획의 다섯 번째 유인우주비행인 동시에 세 번째 유인 달 탐사이기도 했다. 1969년 7월 16일에 발사되었으며 선장 닐 암스트롱, 사령선 조종사 마이클 콜린스, 달 착륙선 조종사 버즈 올드린이 탔다. 7월 20일 암스트롱과 올드린은 달에 발을 딛은 최초의 인류가 되었다. 당시 콜린스는 달 궤도를 돌고 있었다.',
'사일런스는 닥터를 죽이기 위한 계획의 일환으로 우주복이 필요했으며, 전 인류에 걸쳐 \'암시 능력\'을 이용해 인류가 달에 가기 위한 연구를 하게 만들고 그 결과 인간이 만들어낸 우주복을 훔쳐 각종 최신 과학기술력을 탑재하여 개조한다. 하지만 사일런스가 "인간은 우릴 보고있을 때만 죽일 수 있다." 라고 말한 장면을 닥터가 아폴로 우주선의 송신 장치에 붙여놓아 아폴로 우주선이 달 착륙할때 TV 화면을 보고있던 전 세계 사람들에게 \'사일런스를 죽여라\'라는 암시가 걸리고 그 결과 사일런스는 1969년를 기점으로 더이상 인류에게 암시를 하지 못하게 되었다.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 5,376 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 14.66 tokens</li><li>max: 83 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 151.18 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 169.71 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>로마의 면적은 서울시의 2배인가요?</code> | <code>로마()는 이탈리아의 수도이자 라치오주의 주도로, 테베레 강 연안에 있다. 로마시의 행정구역 면적은 1,285km로 서울시의 2배정도이고, 2014년 인구는 290여만명이다. 로마시 권역의 인구는 430여만명이다. 로마 대도시현의 인구는 400만이 넘지만 밀라노나 나폴리 대도시현에 비해 면적이 3~4배 넓은 편이고 되려 로마시의 면적과 밀라노와 나폴리의 대도시현의 면적이 비슷하므로 세 도시 모두 300만 정도로 비슷한 규모의 도시라 볼 수 있다.</code> | <code>도봉구는 서울시청으로부터 약12km 동북부에 구의 중심인 방학동이 위치하며, 구 전체면적은 20.84km로 서울특별시 면적의 3.4%를 차지하고 있다. 도봉구 면적 중에서 가장 많이 차지하는 부분은 북한산국립공원을 비롯한 공원으로, 구면적의 48.2%인 10.05km에 달하고 있다. 서울시의 최북단에 위치한 도봉구는 동쪽으로 노원구 상계동과, 서쪽은 강북구 수유동·우이동과, 남쪽은 노원구 월계동 및 강북구 번동과 북쪽은 의정부시 호원동 등과 접하고 있는 서울 동북부의 관문 지역이다.</code> |
| <code>로마의 면적은 서울시의 2배인가요?</code> | <code>로마()는 이탈리아의 수도이자 라치오주의 주도로, 테베레 강 연안에 있다. 로마시의 행정구역 면적은 1,285km로 서울시의 2배정도이고, 2014년 인구는 290여만명이다. 로마시 권역의 인구는 430여만명이다. 로마 대도시현의 인구는 400만이 넘지만 밀라노나 나폴리 대도시현에 비해 면적이 3~4배 넓은 편이고 되려 로마시의 면적과 밀라노와 나폴리의 대도시현의 면적이 비슷하므로 세 도시 모두 300만 정도로 비슷한 규모의 도시라 볼 수 있다.</code> | <code>신안군(新安郡)은 유인도 72개와 무인도 932개로 이뤄져 있다. 섬의 면적만 (655km)에 달하고, 바다와 육지 넓이를 더한 신안군의 면적은 서울시의 22배나 된다. 이런 넓은 지역을 36곳의 치안센터와 파출소에 근무하는 목포경찰서 소속 경찰관 100여명이 관리해, 이전부터 치안 공백을 우려하는 주민들의 지적이 많았다. 신안군 한 사회단체 관계자는 "신안에 경찰서가 있었다면 염전 종사자 관리감독이 이처럼 방관 상태까지 이르지 않았을 것이다"고 주장했다.</code> |
| <code>로마의 면적은 서울시의 2배인가요?</code> | <code>로마()는 이탈리아의 수도이자 라치오주의 주도로, 테베레 강 연안에 있다. 로마시의 행정구역 면적은 1,285km로 서울시의 2배정도이고, 2014년 인구는 290여만명이다. 로마시 권역의 인구는 430여만명이다. 로마 대도시현의 인구는 400만이 넘지만 밀라노나 나폴리 대도시현에 비해 면적이 3~4배 넓은 편이고 되려 로마시의 면적과 밀라노와 나폴리의 대도시현의 면적이 비슷하므로 세 도시 모두 300만 정도로 비슷한 규모의 도시라 볼 수 있다.</code> | <code>로마는 2015년 1월 1일부로 로마 수도 광역시의 행정 중심지가 되었다. 이 로마 수도 광역시는 로마 광역권에 북쪽으로 치비타베키아까지 뻗어나갔던 구 로마현을 대체했다. 로마 수도 광역시의 면적은 총 5,353제곱미터로 이탈리아에서 가장 크며, 리구리아주에 맞먹는다. 이와 더불어 로마는 라치오주의 주도이기도 하다.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 64
- `gradient_accumulation_steps`: 8
- `learning_rate`: 0.0001
- `adam_epsilon`: 1e-07
- `num_train_epochs`: 10
- `warmup_ratio`: 0.1
- `fp16`: True
- `dataloader_drop_last`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-07
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0952 | 1 | 5.6584 |
| 0.1905 | 2 | 5.6663 |
| 0.2857 | 3 | 5.2883 |
| 0.3810 | 4 | 5.5523 |
| 0.4762 | 5 | 5.5037 |
| 0.5714 | 6 | 5.1176 |
| 0.6667 | 7 | 4.9949 |
| 0.7619 | 8 | 5.0314 |
| 0.8571 | 9 | 4.4824 |
| 0.9524 | 10 | 4.1297 |
| 1.0952 | 11 | 3.6362 |
| 1.1905 | 12 | 2.9783 |
| 1.2857 | 13 | 2.6855 |
| 1.3810 | 14 | 2.1482 |
| 1.4762 | 15 | 1.9731 |
| 1.5714 | 16 | 1.6655 |
| 1.6667 | 17 | 1.5604 |
| 1.7619 | 18 | 1.3974 |
| 1.8571 | 19 | 1.2828 |
| 1.9524 | 20 | 1.3931 |
| 2.0952 | 21 | 1.0056 |
| 2.1905 | 22 | 0.8308 |
| 2.2857 | 23 | 0.7171 |
| 2.3810 | 24 | 0.6162 |
| 2.4762 | 25 | 0.6624 |
| 2.5714 | 26 | 0.5194 |
| 2.6667 | 27 | 0.5322 |
| 2.7619 | 28 | 0.457 |
| 2.8571 | 29 | 0.5596 |
| 2.9524 | 30 | 0.5194 |
| 3.0952 | 31 | 0.3777 |
| 3.1905 | 32 | 0.324 |
| 3.2857 | 33 | 0.2961 |
| 3.3810 | 34 | 0.2515 |
| 3.4762 | 35 | 0.2501 |
| 3.5714 | 36 | 0.2552 |
| 3.6667 | 37 | 0.1956 |
| 3.7619 | 38 | 0.1688 |
| 3.8571 | 39 | 0.207 |
| 3.9524 | 40 | 0.2219 |
| 4.0952 | 41 | 0.1458 |
| 4.1905 | 42 | 0.1345 |
| 4.2857 | 43 | 0.1421 |
| 4.3810 | 44 | 0.1228 |
| 4.4762 | 45 | 0.1158 |
| 4.5714 | 46 | 0.1105 |
| 4.6667 | 47 | 0.0788 |
| 4.7619 | 48 | 0.079 |
| 4.8571 | 49 | 0.111 |
| 4.9524 | 50 | 0.1202 |
| 5.0952 | 51 | 0.0685 |
| 5.1905 | 52 | 0.0834 |
| 5.2857 | 53 | 0.0711 |
| 5.3810 | 54 | 0.0694 |
| 5.4762 | 55 | 0.0627 |
| 5.5714 | 56 | 0.0655 |
| 5.6667 | 57 | 0.0576 |
| 5.7619 | 58 | 0.0467 |
| 5.8571 | 59 | 0.0582 |
| 5.9524 | 60 | 0.07 |
| 6.0952 | 61 | 0.0399 |
| 6.1905 | 62 | 0.0498 |
| 6.2857 | 63 | 0.0509 |
| 6.3810 | 64 | 0.0495 |
| 6.4762 | 65 | 0.0399 |
| 6.5714 | 66 | 0.0305 |
| 6.6667 | 67 | 0.0202 |
| 6.7619 | 68 | 0.0205 |
| 6.8571 | 69 | 0.0321 |
| 6.9524 | 70 | 0.048 |
| 7.0952 | 71 | 0.0231 |
| 7.1905 | 72 | 0.0388 |
| 7.2857 | 73 | 0.0241 |
| 7.3810 | 74 | 0.0227 |
| 7.4762 | 75 | 0.0241 |
| 7.5714 | 76 | 0.0252 |
| 7.6667 | 77 | 0.0202 |
| 7.7619 | 78 | 0.0171 |
| 7.8571 | 79 | 0.0277 |
| 7.9524 | 80 | 0.0352 |
| 8.0952 | 81 | 0.016 |
| 8.1905 | 82 | 0.0186 |
| 8.2857 | 83 | 0.0228 |
| 8.3810 | 84 | 0.0173 |
| 8.4762 | 85 | 0.0134 |
| 8.5714 | 86 | 0.0138 |
| 8.6667 | 87 | 0.0126 |
| 8.7619 | 88 | 0.0108 |
| 8.8571 | 89 | 0.0156 |
| 8.9524 | 90 | 0.0235 |
| 9.0952 | 91 | 0.0117 |
| 9.1905 | 92 | 0.0155 |
| 9.2857 | 93 | 0.0135 |
| 9.3810 | 94 | 0.0162 |
| 9.4762 | 95 | 0.0121 |
| 9.5714 | 96 | 0.0125 |
| 9.6667 | 97 | 0.0113 |
| 9.7619 | 98 | 0.0085 |
| 9.8571 | 99 | 0.0164 |
| 9.9524 | 100 | 0.0206 |
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.4.0+cu121
- Accelerate: 1.1.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "Alibaba-NLP/gte-multilingual-mlm-base", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:5376", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "세종특별자치시의 행정구역은 어떻게 되나요?", "sentences": ["잉글랜드 주민들이 앵글로색슨족인 반면 스코틀랜드 주민들은 아일랜드나 프랑스 북부 브르타뉴, 스페인 북부 갈리시아와 같은 켈트족이다. 따라서 잉글랜드와는 문화가 다르며, 국민성도 다르다. 언어는 원래 영어와 다른 스코틀랜드 게일어, 그리고 스코트어를 사용하였으나 영어의 영향 및 스코틀랜드문화에 대한 잉글랜드의 탄압으로 사용인구가 줄어들었다. 스코트어는 영어와 계통이 같은 게르만어파로서 중세영어의 모습을 많이 보존하고 있다. 스코틀랜드 사람들의 영어도 스코트어가 혼합되어 있는데, 발음도 잉글랜드 발음과 많이 다르다. 예를 들어 잉글랜드 사람들은 모음 다음에 나오는 알(R)을 발음하지 않는 반면 스코틀랜드 사람들은 미국이나 캐나다 사람들처럼 알(R)을 분명히 발음한다. 스코틀랜드 게일어는 게르만어파와 전혀 다른 켈트어파에 속하는 켈트족의 순수한 언어로서 현재 사용인구가 1000여명밖에 남아있지 않아 사멸위기에 놓여있다. 2005년에 공용어로 지정되었다.2005년 총선거 이전까지 스코틀랜드에는 영국 의회의 하원 의석 가운데 72석이 할당되어 있었다. 이것은 연합왕국을 구성하는 다른 지역에 비해 의석이 과다 할당된 것이었다. 1998년 스코틀랜드법의 81조를 통해 스코틀랜드의 선거구를 잉글랜드와 균등하게 조정하였다. 그 결과 스코틀랜드에 할당된 의석을 57석으로 줄이는 스코틀랜드 선거구 조정 위원회의 권고가 2005년 총선거에서부터 적용되었다. 이러한 의석수 삭감을 위해 영국 의회에서는 1998년 스코틀랜드법에 대한 수정안이 2004년 스코틀랜드 선거구법으로 통과되었다. 과도할당된 의석을 바탕으로 스코틀랜드는 영국 하원 내에서의 큰 발언권을 갖고 있었으나, 스코틀랜드 의회가 부활되고 난 이후에는 그러한 발언권이 굳이 필요하지 않게 되었다는 점도 관련된다.", "새만금지역은 행정구역상 2시1군 19개 읍·면·동으로 군산시, 김제시, 부안군에 걸쳐 총 401km에 이르며 뉴욕 맨해튼의 5배, 서울특별시 면적의 3분의2, 행정복합도시로 새롭게 탄생되는 세종특별자치시 면적의 5.7배에 달한다.", "세종특별자치시의 행정 구역은 1읍, 9면, 9행정동(14개 법정동)으로 구성되어 있으며, 2018년 6월 말 주민등록 기준으로 인구는 30만0332 명, 11만7734 가구이다."]}, {"source_sentence": "니체는 어느 나라 사람인가?", "sentences": ["성리학(性理學) 12세기에 남송의 주희(朱熹)가 집대성한 유교의 주류 학파이다. 성리학의 어원은 주희가 주창한 성즉리(性卽理)를 축약한 명칭이다.", "니체는 1844년 10월 15일 예전의 프로이센 (독일)의 작센 지방의 작은 마을인 뢰켄(Röcken)에서 루터교 목사의 아들로 태어났다. 그의 이름은 프러시아의 왕인 프리드리히 빌헬름 4세에게서 빌려온 것으로, 빌헬름 4세는 니체가 태어나던 날에 나이가 49세를 넘어있었다(니체는 훗날 그의 이름에서 가운데에 있던 \"빌헬름\"을 빼 버렸다.) 니체의 아버지인 카를 빌헬름 루트비히 니체(1813-1849)는 루터교회 목사이자 전직 교사이었고, 프란치스카 욀러(1826~1897)와 1843년에 결혼하였다. 그의 여동생인 엘리자베스 니체는 1846년에 태어났고, 뒤를 이어 남동생인 루드비히 요셉이 1848년에 태어났다. 니체의 아버지는 뇌 질환으로 1849년에 세상을 떠났다. 그의 어린 남동생은 1850년에 죽었다. 그 후 가족은 나움부르크로 이사를 갔고, 그곳에서 니체의 할머니와 어머니 프란치스카, 아버지의 결혼하지 않은 두 자매, 두 하녀들과 함께 살며 어린시절을 보냈다. 니체의 할머니가 1856년에 세상을 하직하자, 가족은 그들의 집으로 이사했다.", "19세기 철학자 프리드리히 니체는 '신은 죽었다.'라는 발언으로 잘 알려진 명사이다. 그러나 이 발언은 니체가 직접 한 발언이 아니라 그의 작품 속 인물들 사이의 대화에서 나온 문장이다. 니체는 기독교적 유신론이 한 신앙 체제로서 서구 세계의 도덕적 근원이였으나 현대 사상의 결과 즉 허무주의의 부상과 기독교적 가치의 하락으로 붕괴되고 거부되었다고 논증하였다. 니체는 굳은 무신론자로서 허무주의와 인본주의의 부작용을 걱정하기도 하였다. 따라서 그는 새로운 사회를 요구하였는데, 그것은 바로 초인 중심의 사회였다."]}, {"source_sentence": "돼지의 번식은 태어난 지 몇개월 정도 되면 짝짓기를 할 수 있나요?", "sentences": ["워싱턴 기념탑(555피트, 약 169.3m)의 높이 이상으로 건축물을 건설하지 못하도록 하여, 뉴욕와 달리 초고층 건물이 드물다. 백악관, 펜타곤, 연방 의사당 등 미국의 주요 정부 기관이 있다. 정식 연방주가 아니므로 상원 의석은 없으나, 표결권이 없는 미국 하원 파견대표 1명 (2019년 현재 민주당의 Eleanor Holmes Norton의원)과 대통령선거인단 3표를 확보하고 있다.", "두더지는 태어난 지 2-3년이면 새끼를 낳을 수 있다. 3-4월에 짝짓기를 하고 1년에 1회, 2-7마리의 새끼를 낳는다. 성장이 매우 빨라서 6개월이면 앞발을 제외하고 어미와 비슷한 크기로 자란다. 수명은 5년 정도이다.", "돼지는 번식이 매우 빨라서 태어난 지 8개월 정도가 되면 짝짓기를 할 수 있다. 임신기간은 114일이고 보통 한배에 8-12마리를 낳는다. 갓 태어난 새끼돼지는 약 1kg가량이고, 1주일이 지나면 두 배로 늘어난다. 다 자란 수퇘지는 230 kg 이상 되며, 암퇘지도 200kg이 넘는다. 돼지가 완전히 자라기 위해서는 1년 반에서 2년이 걸리며 수명은 9-15년이다. 돼지는 오랜 시간과 노력을 거친 품종 개량으로 오늘날 수많은 종에 이르렀다.이슬람교와 유대교의 발원지인 서아시아가 고온건조한 사막기후여서 잡식동물인 데다가 자주 씻는 돼지는 키우기 부적합한 동물이다. 게다가 비활동적인 성격때문에 불결한 짐승으로 여겨졌다. 지금도 이슬람교도와 유대인들은 돼지고기를 먹지 않는 전통이 있다."]}, {"source_sentence": "인간은 언제 달에 착륙했는가?", "sentences": ["세종특별자치시의 행정 구역은 1읍, 9면, 9행정동(14개 법정동)으로 구성되어 있으며, 2018년 6월 말 주민등록 기준으로 인구는 30만0332 명, 11만7734 가구이다.", "아폴로 11호(Apollo 11)는 처음으로 달에 착륙한 유인 우주선이다. 아폴로 계획의 다섯 번째 유인우주비행인 동시에 세 번째 유인 달 탐사이기도 했다. 
1969년 7월 16일에 발사되었으며 선장 닐 암스트롱, 사령선 조종사 마이클 콜린스, 달 착륙선 조종사 버즈 올드린이 탔다. 7월 20일 암스트롱과 올드린은 달에 발을 딛은 최초의 인류가 되었다. 당시 콜린스는 달 궤도를 돌고 있었다.", "1969년 7월 16일 13시 32분 UTC에 플로리다 주 케네디 우주 센터에서 새턴 5호 로켓으로 발사되었다. NASA의 5번째 아폴로 프로그램 유인우주선 임무였다. 7월 20일 20시 17분 UTC에 달착륙선이 달의 표면에 착륙했다. 선장은 닐 암스트롱 중위, 조종사는 버즈 올드린이었다. 달에 착륙하고 6시간이 지나서 7월 21일 02시 56분 15초 UTC에 암스트롱이 달에 내렸다. 버즈 올드린은 20분 후에 내려왔다. 2시간 30분을 달에서 걸어다니면서 21.5 kg의 달 모래 등을 채집했다. 달 궤도를 돌고 있는 사령선에는 마이클 콜린스 (우주비행사)가 혼자 남아있었다. 달에 착륙하고 21.5 시간이 지나서 달착륙선이 이륙해 사령선과 도킹했다.콜린스는 원래 아폴로 8호 때 사령선 조종사를 맡기로 되어 있었으나, 등 부상으로 인해 탈락하고 짐 러벨이 대신 임무를 맡았다. 또한 아폴로 11호 달 착륙선 조종사를 맡기로 되어 있었으나, 사령선 조종사는 우주 비행 경험이 있는 사람만 할 수 있기 때문에 제미니 계획에서 우주비행을 한 경험이 있던 콜린스 비행사가 사령선 조종사로 일종의 '진급'을 하게 되었다.달 착륙선은 미국의 국장이자 아폴로 11호의 기장 마스코트인 흰머리수리에서 착안, 이글(\"Eagle\")로 이름지어졌다. 사령선의 이름은 아폴로 11호 승무원들이 자신들의 달 탐험을 미 대륙을 발견했던 콜롬버스의 이름을 따서 콜럼비아(\"Columbia\")로 이름이 지어졌다. 미국 항공우주국 내부 문서에는 달 착륙선과 사령선을 각각 빙수(\"Snowcone\")와 건초가리(\"Haystack\")로 부를 계획이 잡혀 있었으나, 이들 이름은 언론에 계획이 공표되기 전에 조용히 바뀌었다."]}, {"source_sentence": "인간은 언제 달에 착륙했는가?", "sentences": ["아폴로 11호(Apollo 11)는 처음으로 달에 착륙한 유인 우주선이다. 아폴로 계획의 다섯 번째 유인우주비행인 동시에 세 번째 유인 달 탐사이기도 했다. 1969년 7월 16일에 발사되었으며 선장 닐 암스트롱, 사령선 조종사 마이클 콜린스, 달 착륙선 조종사 버즈 올드린이 탔다. 7월 20일 암스트롱과 올드린은 달에 발을 딛은 최초의 인류가 되었다. 당시 콜린스는 달 궤도를 돌고 있었다.", "가톨릭 신자들은 모든 주일 (그리스도교)과 더불어 의무 대축일에 해당하는 천주의 성모 마리아 대축일(1월 1일), 주님 부활 대축일, 성모 승천 대축일(8월 15일), 주님 성탄 대축일(12월 25일)에 미사에 참례해야 하는 것이 기본적인 의무이다.", "사일런스는 닥터를 죽이기 위한 계획의 일환으로 우주복이 필요했으며, 전 인류에 걸쳐 '암시 능력'을 이용해 인류가 달에 가기 위한 연구를 하게 만들고 그 결과 인간이 만들어낸 우주복을 훔쳐 각종 최신 과학기술력을 탑재하여 개조한다. 하지만 사일런스가 \"인간은 우릴 보고있을 때만 죽일 수 있다.\" 라고 말한 장면을 닥터가 아폴로 우주선의 송신 장치에 붙여놓아 아폴로 우주선이 달 착륙할때 TV 화면을 보고있던 전 세계 사람들에게 '사일런스를 죽여라'라는 암시가 걸리고 그 결과 사일런스는 1969년를 기점으로 더이상 인류에게 암시를 하지 못하게 되었다."]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,297 |
NlpHUST/t5-vi-en-small
|
NlpHUST
|
text2text-generation
|
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2021-06-23T03:45:23+00:00
| 160 | 1 |
---
{}
---
---
language:
- vi
tags:
- t5
- seq2seq
---
# Machine translation for Vietnamese
## Model Description
T5-vi-en-small is a transformer model for Vietnamese machine translation based on the T5 architecture.
## Training data
T5-vi-en-small was trained on 4M sentence pairs (English, Vietnamese).
### How to use
```py
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch
if torch.cuda.is_available():
device = torch.device("cuda")
print('There are %d GPU(s) available.' % torch.cuda.device_count())
print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
print('No GPU available, using the CPU instead.')
device = torch.device("cpu")
model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-vi-en-small")
tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-vi-en-small")
model.to(device)
src = "Indonesia phỏng đoán nguyên nhân tàu ngầm chở 53 người mất tích bí ẩn"
tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device)
model.eval()
summary_ids = model.generate(
tokenized_text,
max_length=256,
num_beams=5,
repetition_penalty=2.5,
length_penalty=1.0,
early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
# Output: Indonesia anticipates the cause of the submarine transporting 53 mysterious missing persons
```
| null |
Non_BioNLP
|
---
language:
- vi
tags:
- t5
- seq2seq
---
# Machine translation for Vietnamese
## Model Description
T5-vi-en-small is a transformer model for Vietnamese machine translation based on the T5 architecture.
## Training data
T5-vi-en-small was trained on 4M sentence pairs (English, Vietnamese).
### How to use
```py
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch
if torch.cuda.is_available():
device = torch.device("cuda")
print('There are %d GPU(s) available.' % torch.cuda.device_count())
print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
print('No GPU available, using the CPU instead.')
device = torch.device("cpu")
model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-vi-en-small")
tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-vi-en-small")
model.to(device)
src = "Indonesia phỏng đoán nguyên nhân tàu ngầm chở 53 người mất tích bí ẩn"
tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device)
model.eval()
summary_ids = model.generate(
tokenized_text,
max_length=256,
num_beams=5,
repetition_penalty=2.5,
length_penalty=1.0,
early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
# Output: Indonesia anticipates the cause of the submarine transporting 53 mysterious missing persons
```
|
{}
|
task
|
[
"TRANSLATION"
] | 42,298 |
dtorber/BioNLP-2024-dtorber-baseline_v1-eLife
|
dtorber
|
summarization
|
[
"transformers",
"safetensors",
"led",
"text2text-generation",
"summarization",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-05-06T20:17:27Z |
2024-05-07T02:32:33+00:00
| 14 | 0 |
---
tags:
- summarization
- generated_from_trainer
model-index:
- name: BioNLP-2024-dtorber-baseline_v1-eLife
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioNLP-2024-dtorber-baseline_v1-eLife
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.3739167643078955e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
| null |
BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioNLP-2024-dtorber-baseline_v1-eLife
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.3739167643078955e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
{"tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "BioNLP-2024-dtorber-baseline_v1-eLife", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 42,299 |
meetween/Llama-speechlmm-1.0-xl
|
meetween
| null |
[
"transformers",
"safetensors",
"llava",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | 2025-03-01T11:14:14Z |
2025-03-12T09:32:25+00:00
| 35 | 0 |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: Llama-speechlmm-1.0-xl
results: []
---
## Model information
The SpeechLMM 1.0 collection of multimodal and multilingual large language models comprises instruction-tuned generative models in 4 different sizes: S (2B), M (4B), L (9B) and XL (71B), supporting text, audio and video as input and only text as output. The SpeechLMM 1.0 models are optimized for various X-to-text generation tasks, namely:
- Machine Translation
- Automatic Speech Recognition
- Speech Translation
- Speech Summarization
- Spoken Question Answering
- Spoken Language Understanding (beta)
- Visual Speech Recognition (beta)
**Model Developer:** Meetween consortium
**Supported Languages:** English, French, Italian, German, and Spanish are officially supported (for a subset of the supported tasks). The Llama 3.X backbone and the SeamlessM4T v2 audio encoder have been trained on a broader collection of languages than these 5 supported languages, so the model might exhibit good performance on other languages too.
**Model Release Date:** Feb 28, 2025
**License:** see [LICENSE](LICENSE)
### Model Architecture
SpeechLMM 1.0 is an auto-regressive multimodal language model based on a Llama 3.X backbone (X varies with the model size), a speech-specific stack consisting of a pre-trained audio encoder ([SeamlessM4T v2](https://ai.meta.com/research/publications/seamless-multilingual-expressive-and-streaming-speech-translation/)) and an audio adapter, and a video-specific stack consisting of a pre-trained video encoder ([Auto-AVSR](https://ieeexplore.ieee.org/document/10096889)) and a video adapter.
<!-- TODO: add the image of the model architecture here -->
| Model | Params | Input modalities | Output modalities | Context Length |
|:---------------- |:----------- |:------------------------------------------ |:----------------- |:-------------- |
| SpeechLMM 1.0 S | 2B (2.17B) | Multilingual text and audio, English video | Multilingual Text | 128k |
| SpeechLMM 1.0 M | 4B (4.15B) | Multilingual text and audio, English video | Multilingual Text | 128k |
| SpeechLMM 1.0 L | 9B (8.98B) | Multilingual text and audio, English video | Multilingual Text | 128k |
| SpeechLMM 1.0 XL (beta) | 71B (71.5B) | Multilingual text and audio, English video | Multilingual Text | 128k |
#### Audio and video encoders
For all the 4 sizes of SpeechLMM 1.0, the audio encoder is **SeamlessM4T v2 Large** (`facebook/seamless-m4t-v2-large`) and the video encoder is **Auto-AVSR** (`vsr_trlrs3vox2_base`).
#### Audio and video adapters
For all the 4 sizes of SpeechLMM 1.0, the audio and video adapters are:
| Modality | Architecture | Number of layers | Compression factor |
| :------- | :----------- | :--------------- | :----------------- |
| Audio | MLP | 4 | 1 |
| Video | Window-level Q-former <br> (4 queries) | 4 | 4 |
#### LLM backbone
| Model | Backbone |
|:---------------- |:---------------------- |
| SpeechLMM 1.0 S | Llama 3.2 1B Instruct |
| SpeechLMM 1.0 M | Llama 3.2 3B Instruct |
| SpeechLMM 1.0 L | Llama 3.1 8B Instruct |
| SpeechLMM 1.0 XL (beta) | Llama 3.3 70B Instruct |
## How to use
Currently, this model can only be used via our [`speechlmm`](https://github.com/meetween/speechlmm) codebase. Refer to the instructions there for more details.
Important: before you can use this model, you must download the SeamlessM4T v2 speech encoder and the Auto-AVSR video encoder by following the instructions provided in the README of the above repo. Please note that by doing so, you agree with their respective license terms.
## Training Data
### Monolingual
| TASK | Task name | Dataset | Language | License |
| -------- | ---------------------------- | ------------------ | -------- | ------------------------------------------ |
| **ASR** | Automatic Speech Recognition | **LibriHeavy** | en | CC-BY-4.0 |
| | | **LibriTTS** | en | CC BY 4.0 |
| | | **AMI** | en | CC-BY-4.0 |
| | | **ICSI** | en | CC-BY-4.0 |
| **VSR** | Visual Speech Recognition | **LRS2-BBC** | en | Custom |
| **SSUM** | Speech Summarization | **AMI** | en | CC-BY-4.0 |
| | | **ICSI** | en | CC-BY-4.0 |
| **SQA** | Spoken Question Answering | **Spoken SQUAD** | en | CC-BY-SA-4.0 |
| **SLU** | Spoken Language Understanding| **SLURP** | en | CC BY 4.0 (text) <br> CC BY-NC 4.0 (audio) |
### Multilingual
| TASK | Task name | Dataset | Language | License |
| ---------------- | ----------------------------- | ------------------------------------ | ------------------------------------------- | ------------------------------------------ |
| **ASR** | Automatic Speech Recognition | **CoVoST2** | en, fr, it, de, es | CC0 |
| | | **CommonVoice** | en, fr, it, de, es | Apache-2.0 |
| **ST** | Speech-to-text Translation | **CoVoST2** | en → de, {fr, it, de, es} → en | CC0 |
| | | **EuroParl-ST** | {en, fr, it, de, es} → {en, fr, it, de, es} | CC-BY-NC-4.0 |
| **MT** | Machine Translation | **EuroParl-ST** | {en, fr, it, de, es} → {en, fr, it, de, es} | CC-BY-NC-4.0 |
| **TextInstruct** | Text Instruction Following | **Everything_Instruct_Multilingual** | en, fr, it, de, es, ru, zh, ko, ur, la, ar,<br>hi, ja, nl, pt | Apache-2.0 |
| **SLU** | Spoken Language Understanding | **Speech-Massive** | fr, de | CC-BY-NC-SA-4.0 |
## Evaluation Results
Results for the XL model are coming soon...
## Framework versions
- Transformers 4.45.0
- Pytorch 2.3.1+cu124.post2
- Datasets 3.2.0
- Tokenizers 0.20.0
| null |
Non_BioNLP
|
## Model information
The SpeechLMM 1.0 collection of multimodal and multilingual large language models comprises instruction-tuned generative models in 4 different sizes: S (2B), M (4B), L (9B) and XL (71B), supporting text, audio and video as input and only text as output. The SpeechLMM 1.0 models are optimized for various X-to-text generation tasks, namely:
- Machine Translation
- Automatic Speech Recognition
- Speech Translation
- Speech Summarization
- Spoken Question Answering
- Spoken Language Understanding (beta)
- Visual Speech Recognition (beta)
**Model Developer:** Meetween consortium
**Supported Languages:** English, French, Italian, German, and Spanish are officially supported (for a subset of the supported tasks). The Llama 3.X backbone and the SeamlessM4T v2 audio encoder have been trained on a broader collection of languages than these 5 supported languages, so the model might exhibit good performance on other languages too.
**Model Release Date:** Feb 28, 2025
**License:** see [LICENSE](LICENSE)
### Model Architecture
SpeechLMM 1.0 is an auto-regressive multimodal language model based on a Llama 3.X backbone (X varies with the model size), a speech-specific stack consisting of a pre-trained audio encoder ([SeamlessM4T v2](https://ai.meta.com/research/publications/seamless-multilingual-expressive-and-streaming-speech-translation/)) and an audio adapter, and a video-specific stack consisting of a pre-trained video encoder ([Auto-AVSR](https://ieeexplore.ieee.org/document/10096889)) and a video adapter.
<!-- TODO: add the image of the model architecture here -->
| Model | Params | Input modalities | Output modalities | Context Length |
|:---------------- |:----------- |:------------------------------------------ |:----------------- |:-------------- |
| SpeechLMM 1.0 S | 2B (2.17B) | Multilingual text and audio, English video | Multilingual Text | 128k |
| SpeechLMM 1.0 M | 4B (4.15B) | Multilingual text and audio, English video | Multilingual Text | 128k |
| SpeechLMM 1.0 L | 9B (8.98B) | Multilingual text and audio, English video | Multilingual Text | 128k |
| SpeechLMM 1.0 XL (beta) | 71B (71.5B) | Multilingual text and audio, English video | Multilingual Text | 128k |
#### Audio and video encoders
For all the 4 sizes of SpeechLMM 1.0, the audio encoder is **SeamlessM4T v2 Large** (`facebook/seamless-m4t-v2-large`) and the video encoder is **Auto-AVSR** (`vsr_trlrs3vox2_base`).
#### Audio and video adapters
For all the 4 sizes of SpeechLMM 1.0, the audio and video adapters are:
| Modality | Architecture | Number of layers | Compression factor |
| :------- | :----------- | :--------------- | :----------------- |
| Audio | MLP | 4 | 1 |
| Video | Window-level Q-former <br> (4 queries) | 4 | 4 |
#### LLM backbone
| Model | Backbone |
|:---------------- |:---------------------- |
| SpeechLMM 1.0 S | Llama 3.2 1B Instruct |
| SpeechLMM 1.0 M | Llama 3.2 3B Instruct |
| SpeechLMM 1.0 L | Llama 3.1 8B Instruct |
| SpeechLMM 1.0 XL (beta) | Llama 3.3 70B Instruct |
## How to use
Currently, this model can only be used via our [`speechlmm`](https://github.com/meetween/speechlmm) codebase. Refer to the instructions there for more details.
Important: before you can use this model, you must download the SeamlessM4T v2 speech encoder and the Auto-AVSR video encoder by following the instructions provided in the README of the above repo. Please note that by doing so, you agree with their respective license terms.
## Training Data
### Monolingual
| TASK | Task name | Dataset | Language | License |
| -------- | ---------------------------- | ------------------ | -------- | ------------------------------------------ |
| **ASR** | Automatic Speech Recognition | **LibriHeavy** | en | CC-BY-4.0 |
| | | **LibriTTS** | en | CC BY 4.0 |
| | | **AMI** | en | CC-BY-4.0 |
| | | **ICSI** | en | CC-BY-4.0 |
| **VSR** | Visual Speech Recognition | **LRS2-BBC** | en | Custom |
| **SSUM** | Speech Summarization | **AMI** | en | CC-BY-4.0 |
| | | **ICSI** | en | CC-BY-4.0 |
| **SQA** | Spoken Question Answering | **Spoken SQUAD** | en | CC-BY-SA-4.0 |
| **SLU** | Spoken Language Understanding| **SLURP** | en | CC BY 4.0 (text) <br> CC BY-NC 4.0 (audio) |
### Multilingual
| TASK | Task name | Dataset | Language | License |
| ---------------- | ----------------------------- | ------------------------------------ | ------------------------------------------- | ------------------------------------------ |
| **ASR** | Automatic Speech Recognition | **CoVoST2** | en, fr, it, de, es | CC0 |
| | | **CommonVoice** | en, fr, it, de, es | Apache-2.0 |
| **ST** | Speech-to-text Translation | **CoVoST2** | en → de, {fr, it, de, es} → en | CC0 |
| | | **EuroParl-ST** | {en, fr, it, de, es} → {en, fr, it, de, es} | CC-BY-NC-4.0 |
| **MT** | Machine Translation | **EuroParl-ST** | {en, fr, it, de, es} → {en, fr, it, de, es} | CC-BY-NC-4.0 |
| **TextInstruct** | Text Instruction Following | **Everything_Instruct_Multilingual** | en, fr, it, de, es, ru, zh, ko, ur, la, ar,<br>hi, ja, nl, pt | Apache-2.0 |
| **SLU** | Spoken Language Understanding | **Speech-Massive** | fr, de | CC-BY-NC-SA-4.0 |
## Evaluation Results
Results for the XL model are coming soon...
## Framework versions
- Transformers 4.45.0
- Pytorch 2.3.1+cu124.post2
- Datasets 3.2.0
- Tokenizers 0.20.0
|
{"library_name": "transformers", "tags": ["generated_from_trainer"], "model-index": [{"name": "Llama-speechlmm-1.0-xl", "results": []}]}
|
task
|
[
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] | 42,300 |
AlgorithmicResearchGroup/flan-t5-xxl-arxiv-cs-ml-closed-qa
|
AlgorithmicResearchGroup
|
summarization
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv",
"summarization",
"en",
"dataset:ArtifactAI/arxiv-cs-ml-instruct-tune-50k",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-06-26T14:17:24Z |
2023-06-27T04:40:26+00:00
| 21 | 0 |
---
datasets:
- ArtifactAI/arxiv-cs-ml-instruct-tune-50k
language:
- en
license: apache-2.0
pipeline_tag: summarization
tags:
- arxiv
widget:
- text: What is an LSTM?
example_title: Question Answering
---
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Citation](#citation)
# TL;DR
This is a FLAN-T5-XXL model trained on [ArtifactAI/arxiv-cs-ml-instruct-50k](https://huggingface.co/datasets/ArtifactAI/arxiv-cs-ml-instruct-50k). This model is for research purposes only and ***should not be used in production settings***.
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5)
# Usage
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Load peft config for pre-trained checkpoint etc.
peft_model_id = "ArtifactAI/flant5-xxl-math-full-training-run-one"
config = PeftConfig.from_pretrained(peft_model_id)
# load base LLM model and tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path, load_in_8bit=True, device_map={"":0})
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id, device_map={"":0})
model.eval()
input_ids = tokenizer("What is the peak phase of T-eV?", return_tensors="pt", truncation=True).input_ids.cuda()
# with torch.inference_mode():
outputs = model.generate(input_ids=input_ids, max_new_tokens=1000, do_sample=True, top_p=0.9)
print(f"summary: {tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0]}")
```
## Training Data
The model was trained on [ArtifactAI/arxiv-math-instruct-50k](https://huggingface.co/datasets/ArtifactAI/arxiv-cs-ml-instruct-50k), a dataset of question/answer pairs. Questions are generated using the t5-base model, while the answers are generated using the GPT-3.5-turbo model.
# Citation
```
@misc{flan-t5-xxl-arxiv-cs-ml-zeroshot-qa,
title={flan-t5-xxl-arxiv-cs-ml-zeroshot-qa},
author={Matthew Kenney},
year={2023}
}
```
| null |
Non_BioNLP
|
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Citation](#citation)
# TL;DR
This is a FLAN-T5-XXL model trained on [ArtifactAI/arxiv-cs-ml-instruct-50k](https://huggingface.co/datasets/ArtifactAI/arxiv-cs-ml-instruct-50k). This model is for research purposes only and ***should not be used in production settings***.
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5)
# Usage
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Load peft config for pre-trained checkpoint etc.
peft_model_id = "ArtifactAI/flant5-xxl-math-full-training-run-one"
config = PeftConfig.from_pretrained(peft_model_id)
# load base LLM model and tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path, load_in_8bit=True, device_map={"":0})
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id, device_map={"":0})
model.eval()
input_ids = tokenizer("What is the peak phase of T-eV?", return_tensors="pt", truncation=True).input_ids.cuda()
# with torch.inference_mode():
outputs = model.generate(input_ids=input_ids, max_new_tokens=1000, do_sample=True, top_p=0.9)
print(f"summary: {tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0]}")
```
## Training Data
The model was trained on [ArtifactAI/arxiv-math-instruct-50k](https://huggingface.co/datasets/ArtifactAI/arxiv-cs-ml-instruct-50k), a dataset of question/answer pairs. Questions are generated using the t5-base model, while the answers are generated using the GPT-3.5-turbo model.
# Citation
```
@misc{flan-t5-xxl-arxiv-cs-ml-zeroshot-qa,
title={flan-t5-xxl-arxiv-cs-ml-zeroshot-qa},
author={Matthew Kenney},
year={2023}
}
```
|
{"datasets": ["ArtifactAI/arxiv-cs-ml-instruct-tune-50k"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "summarization", "tags": ["arxiv"], "widget": [{"text": "What is an LSTM?", "example_title": "Question Answering"}]}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 42,301 |
Puuje/bdaalt
|
Puuje
|
visual-question-answering
|
[
"pytorch",
"vilt",
"visual-question-answering",
"arxiv:2102.03334",
"license:apache-2.0",
"region:us"
] | 2024-12-10T15:07:30Z |
2024-12-10T15:08:07+00:00
| 13 | 0 |
---
license: apache-2.0
tags:
- visual-question-answering
widget:
- text: What's the animal doing?
src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
- text: What is on top of the building?
src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
---
# Vision-and-Language Transformer (ViLT), fine-tuned on VQAv2
Vision-and-Language Transformer (ViLT) model fine-tuned on [VQAv2](https://visualqa.org/). It was introduced in the paper [ViLT: Vision-and-Language Transformer
Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT).
Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Intended uses & limitations
You can use the raw model for visual question answering.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import ViltProcessor, ViltForQuestionAnswering
import requests
from PIL import Image
# prepare image + question
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "How many cats are there?"
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
# forward pass
outputs = model(**encoding)
logits = outputs.logits
idx = logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
```
## Training data
(to do)
## Training procedure
### Preprocessing
(to do)
### Pretraining
(to do)
## Evaluation results
(to do)
### BibTeX entry and citation info
```bibtex
@misc{kim2021vilt,
title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
author={Wonjae Kim and Bokyung Son and Ildoo Kim},
year={2021},
eprint={2102.03334},
archivePrefix={arXiv},
primaryClass={stat.ML}
}
```
| null |
Non_BioNLP
|
# Vision-and-Language Transformer (ViLT), fine-tuned on VQAv2
Vision-and-Language Transformer (ViLT) model fine-tuned on [VQAv2](https://visualqa.org/). It was introduced in the paper [ViLT: Vision-and-Language Transformer
Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT).
Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Intended uses & limitations
You can use the raw model for visual question answering.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import ViltProcessor, ViltForQuestionAnswering
import requests
from PIL import Image
# prepare image + question
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "How many cats are there?"
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
# forward pass
outputs = model(**encoding)
logits = outputs.logits
idx = logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
```
## Training data
(to do)
## Training procedure
### Preprocessing
(to do)
### Pretraining
(to do)
## Evaluation results
(to do)
### BibTeX entry and citation info
```bibtex
@misc{kim2021vilt,
title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
author={Wonjae Kim and Bokyung Son and Ildoo Kim},
year={2021},
eprint={2102.03334},
archivePrefix={arXiv},
primaryClass={stat.ML}
}
```
|
{"license": "apache-2.0", "tags": ["visual-question-answering"], "widget": [{"text": "What's the animal doing?", "src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"}, {"text": "What is on top of the building?", "src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg"}]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,302 |
Neurora/opus-tatoeba-nor-eng
|
Neurora
|
translation
|
[
"safetensors",
"marian",
"translation",
"no",
"en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2025-02-26T21:15:49Z |
2025-02-26T21:17:00+00:00
| 28 | 0 |
---
language:
- 'no'
- en
license: cc-by-nc-sa-4.0
pipeline_tag: translation
---
# Opus Tatoeba | Norwegian -> English
* dataset: opus
* model: transformer
* source language(s): nno nob
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download: [opus-2021-02-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-eng/opus-2021-02-18.zip)
* test set translations: [opus-2021-02-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-eng/opus-2021-02-18.test.txt)
* test set scores: [opus-2021-02-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-eng/opus-2021-02-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| Tatoeba-test.nno-eng | 47.4 | 0.625 | 460 | 3523 | 0.979 |
| Tatoeba-test.nob-eng | 57.9 | 0.721 | 4539 | 36821 | 0.972 |
| Tatoeba-test.nob_Hebr-eng | 1.2 | 0.024 | 1 | 8 | 1.000 |
| Tatoeba-test.nor-eng | 57.2 | 0.715 | 5000 | 40355 | 0.975 |
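## Usage
A minimal inference sketch (not part of the original card): it assumes the checkpoint loads with the standard MarianMT classes in `transformers`, and the Norwegian input sentence is only a placeholder.
```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "Neurora/opus-tatoeba-nor-eng"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

# Norwegian (nno/nob) input sentence -> English output
inputs = tokenizer("Jeg liker å lese bøker.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```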
| null |
Non_BioNLP
|
# Opus Tatoeba | Norwegian -> English
* dataset: opus
* model: transformer
* source language(s): nno nob
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download: [opus-2021-02-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-eng/opus-2021-02-18.zip)
* test set translations: [opus-2021-02-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-eng/opus-2021-02-18.test.txt)
* test set scores: [opus-2021-02-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-eng/opus-2021-02-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| Tatoeba-test.nno-eng | 47.4 | 0.625 | 460 | 3523 | 0.979 |
| Tatoeba-test.nob-eng | 57.9 | 0.721 | 4539 | 36821 | 0.972 |
| Tatoeba-test.nob_Hebr-eng | 1.2 | 0.024 | 1 | 8 | 1.000 |
| Tatoeba-test.nor-eng | 57.2 | 0.715 | 5000 | 40355 | 0.975 |
|
{"language": ["no", "en"], "license": "cc-by-nc-sa-4.0", "pipeline_tag": "translation"}
|
task
|
[
"TRANSLATION"
] | 42,303 |
mqy/mt5-small-finetuned-14feb-5
|
mqy
|
summarization
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-02-14T11:33:43Z |
2023-02-14T15:23:17+00:00
| 16 | 0 |
---
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-14feb-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-14feb-5
This model is a fine-tuned version of [mqy/mt5-small-finetuned-14feb-5](https://huggingface.co/mqy/mt5-small-finetuned-14feb-5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4655
- Rouge1: 19.52
- Rouge2: 6.07
- Rougel: 19.19
## Model description
More information needed
## Intended uses & limitations
More information needed
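A minimal usage sketch (not from the original card; it assumes the checkpoint works with the standard summarization pipeline, and the input text and length limits are placeholders):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="mqy/mt5-small-finetuned-14feb-5")

# Placeholder input; replace with the text you want to summarize
text = "Your long input document goes here."
print(summarizer(text, max_length=64, min_length=8)[0]["summary_text"])
```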
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 1.9445 | 1.0 | 388 | 2.4204 | 20.35 | 6.19 | 19.87 |
| 1.8471 | 2.0 | 776 | 2.4395 | 20.04 | 5.96 | 19.77 |
| 1.7694 | 3.0 | 1164 | 2.4479 | 20.19 | 5.81 | 19.72 |
| 1.6877 | 4.0 | 1552 | 2.4451 | 20.14 | 6.16 | 19.72 |
| 1.6436 | 5.0 | 1940 | 2.4655 | 19.52 | 6.07 | 19.19 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-14feb-5
This model is a fine-tuned version of [mqy/mt5-small-finetuned-14feb-5](https://huggingface.co/mqy/mt5-small-finetuned-14feb-5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4655
- Rouge1: 19.52
- Rouge2: 6.07
- Rougel: 19.19
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 1.9445 | 1.0 | 388 | 2.4204 | 20.35 | 6.19 | 19.87 |
| 1.8471 | 2.0 | 776 | 2.4395 | 20.04 | 5.96 | 19.77 |
| 1.7694 | 3.0 | 1164 | 2.4479 | 20.19 | 5.81 | 19.72 |
| 1.6877 | 4.0 | 1552 | 2.4451 | 20.14 | 6.16 | 19.72 |
| 1.6436 | 5.0 | 1940 | 2.4655 | 19.52 | 6.07 | 19.19 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
{"license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "mt5-small-finetuned-14feb-5", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 42,304 |
hopkins/eng-ind-sim2b
|
hopkins
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-23T17:40:25Z |
2023-07-23T17:57:45+00:00
| 10 | 0 |
---
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: eng-ind-sim2b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-ind-sim2b
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8229
- Bleu: 21.6699
## Model description
More information needed
## Intended uses & limitations
More information needed
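A minimal inference sketch (not part of the original card). It assumes the checkpoint keeps the mBART-50 tokenizer of its base model and that the target language is Indonesian (`id_ID`), as the model name suggests:
```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_id = "hopkins/eng-ind-sim2b"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en_XX"  # source: English
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["id_ID"],  # assumed target: Indonesian
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```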
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-ind-sim2b
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8229
- Bleu: 21.6699
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "eng-ind-sim2b", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 42,305 |
virto/mt_5_small_kook_gen_len_20
|
virto
|
text2text-generation
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"dataset:NLP-MINI-PROJECT/rabbi_kook",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-02-24T08:06:40Z |
2023-02-24T09:21:06+00:00
| 8 | 0 |
---
datasets:
- NLP-MINI-PROJECT/rabbi_kook
license: apache-2.0
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: mt_5_small_kook_gen_len_20
results:
- task:
type: summarization
name: Summarization
dataset:
name: NLP-MINI-PROJECT/rabbi_kook
type: NLP-MINI-PROJECT/rabbi_kook
metrics:
- type: rouge
value: 0.0
name: Rouge1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt_5_small_kook_gen_len_20
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the NLP-MINI-PROJECT/rabbi_kook dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7678
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 47.7155
## Model description
More information needed
## Intended uses & limitations
More information needed
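A minimal inference sketch (not part of the original card; the input text and generation settings are placeholders):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "virto/mt_5_small_kook_gen_len_20"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Text to summarize goes here."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=48, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```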
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 2.9.0
- Tokenizers 0.11.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt_5_small_kook_gen_len_20
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the NLP-MINI-PROJECT/rabbi_kook dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7678
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 47.7155
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 2.9.0
- Tokenizers 0.11.0
|
{"datasets": ["NLP-MINI-PROJECT/rabbi_kook"], "license": "apache-2.0", "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "mt_5_small_kook_gen_len_20", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "NLP-MINI-PROJECT/rabbi_kook", "type": "NLP-MINI-PROJECT/rabbi_kook"}, "metrics": [{"type": "rouge", "value": 0.0, "name": "Rouge1"}]}]}]}
|
task
|
[
"SUMMARIZATION"
] | 42,306 |
FINGU-AI/FINGU-2.5-instruct-32B
|
FINGU-AI
| null |
[
"safetensors",
"qwen2",
"arxiv:2202.01764",
"license:mit",
"region:us"
] | 2025-02-03T09:56:45Z |
2025-02-03T12:22:07+00:00
| 19 | 1 |
---
license: mit
---
# FINGU-AI/FINGU-2.5-instruct-32B
## Overview
`FINGU-AI/FINGU-2.5-instruct-32B` is a versatile causal language model designed to excel in various natural language processing (NLP) tasks, including machine translation, text generation, and chat-based applications. The model demonstrates a strong aptitude for reasoning tasks, particularly in the Japanese language, making it a valuable tool for applications requiring logical inference and complex understanding.
## Reasoning Capabilities
The model's architecture and training regimen have been optimized to enhance its reasoning abilities. This is particularly evident in tasks involving logical deduction and commonsense reasoning in Japanese. For instance, when evaluated on datasets such as JaQuAD—a Japanese Question Answering Dataset—the model exhibits a nuanced understanding of complex logical structures.
Additionally, `FINGU-AI/FINGU-2.5-instruct-32B` has been assessed using the JFLD benchmark, which tests a model's ability for deductive reasoning based on formal logic. The model's performance indicates a robust capacity to handle tasks that require understanding and reasoning over formal logical structures.
## Example Usage
### Installation
Ensure that the required packages are installed:
```bash
pip install torch transformers
```
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Model and Tokenizer
model_id = 'FINGU-AI/FINGU-2.5-instruct-32B'
model = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation="sdpa", torch_dtype=torch.float16, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.to('cuda')
# Input Messages for Translation
messages = [
{"role": "user", "content": """Please reason step by step, and put your final answer within \boxed{}.
translate korean to Japanese.
새로운 은행 계좌를 개설하는 절차는 다음과 같습니다:
1. 계좌 개설 목적과 신분 확인을 위한 서류 제출
2. 서류 검토 과정을 거치는 것
3. 고객님의 신원 확인 절차를 진행하는 것
4. 모든 절차가 완료되면 계좌 개설이 가능합니다.
계좌 개설을 원하시는 경우, 신분증과 함께 방문해 주시면 됩니다.
"""}
]
# Tokenize and Generate Response
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
outputs = model.generate(
input_ids,
max_new_tokens=500,
do_sample=True,
)
# Decode and Print the Response
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
## Relevant Datasets
To further evaluate and enhance the reasoning capabilities of `FINGU-AI/FINGU-2.5-instruct-32B`, the following Japanese reasoning datasets are pertinent:
- **JaQuAD (Japanese Question Answering Dataset)**: A human-annotated dataset created for Japanese Machine Reading Comprehension, consisting of 39,696 extractive question-answer pairs on Japanese Wikipedia articles.
[📄 ARXIV.ORG](https://arxiv.org/abs/2202.01764)
- **JFLD (Japanese Formal Logic Dataset)**: A benchmark designed to evaluate deductive reasoning based on formal logic, providing a structured framework to assess logical reasoning capabilities in Japanese.
[📄 ACLANTHOLOGY.ORG](https://aclanthology.org/2024.lrec-main.832.pdf)
- **JEMHopQA (Japanese Explainable Multi-Hop Question-Answering)**: A dataset for multi-hop QA in Japanese, including question-answer pairs and supporting evidence in the form of derivation triples, facilitating the development of explainable QA systems.
[📄 ACLANTHOLOGY.ORG](https://aclanthology.org/2024.lrec-main.831.pdf)
These datasets provide diverse challenges that can help in assessing and improving the model's reasoning abilities across different contexts and complexities.
## Conclusion
`FINGU-AI/FINGU-2.5-instruct-32B` stands as a robust and adaptable language model, particularly distinguished by its reasoning capabilities in the Japanese language. Its performance across various reasoning benchmarks underscores its potential for applications that demand advanced logical inference and nuanced understanding in NLP tasks.
| null |
Non_BioNLP
|
# FINGU-AI/FINGU-2.5-instruct-32B
## Overview
`FINGU-AI/FINGU-2.5-instruct-32B` is a versatile causal language model designed to excel in various natural language processing (NLP) tasks, including machine translation, text generation, and chat-based applications. The model demonstrates a strong aptitude for reasoning tasks, particularly in the Japanese language, making it a valuable tool for applications requiring logical inference and complex understanding.
## Reasoning Capabilities
The model's architecture and training regimen have been optimized to enhance its reasoning abilities. This is particularly evident in tasks involving logical deduction and commonsense reasoning in Japanese. For instance, when evaluated on datasets such as JaQuAD—a Japanese Question Answering Dataset—the model exhibits a nuanced understanding of complex logical structures.
Additionally, `FINGU-AI/FINGU-2.5-instruct-32B` has been assessed using the JFLD benchmark, which tests a model's ability for deductive reasoning based on formal logic. The model's performance indicates a robust capacity to handle tasks that require understanding and reasoning over formal logical structures.
## Example Usage
### Installation
Ensure that the required packages are installed:
```bash
pip install torch transformers
```
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Model and Tokenizer
model_id = 'FINGU-AI/FINGU-2.5-instruct-32B'
model = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation="sdpa", torch_dtype=torch.float16, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.to('cuda')
# Input Messages for Translation
messages = [
{"role": "user", "content": """Please reason step by step, and put your final answer within \boxed{}.
translate korean to Japanese.
새로운 은행 계좌를 개설하는 절차는 다음과 같습니다:
1. 계좌 개설 목적과 신분 확인을 위한 서류 제출
2. 서류 검토 과정을 거치는 것
3. 고객님의 신원 확인 절차를 진행하는 것
4. 모든 절차가 완료되면 계좌 개설이 가능합니다.
계좌 개설을 원하시는 경우, 신분증과 함께 방문해 주시면 됩니다.
"""}
]
# Tokenize and Generate Response
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
outputs = model.generate(
input_ids,
max_new_tokens=500,
do_sample=True,
)
# Decode and Print the Response
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
## Relevant Datasets
To further evaluate and enhance the reasoning capabilities of `FINGU-AI/FINGU-2.5-instruct-32B`, the following Japanese reasoning datasets are pertinent:
- **JaQuAD (Japanese Question Answering Dataset)**: A human-annotated dataset created for Japanese Machine Reading Comprehension, consisting of 39,696 extractive question-answer pairs on Japanese Wikipedia articles.
[📄 ARXIV.ORG](https://arxiv.org/abs/2202.01764)
- **JFLD (Japanese Formal Logic Dataset)**: A benchmark designed to evaluate deductive reasoning based on formal logic, providing a structured framework to assess logical reasoning capabilities in Japanese.
[📄 ACLANTHOLOGY.ORG](https://aclanthology.org/2024.lrec-main.832.pdf)
- **JEMHopQA (Japanese Explainable Multi-Hop Question-Answering)**: A dataset for multi-hop QA in Japanese, including question-answer pairs and supporting evidence in the form of derivation triples, facilitating the development of explainable QA systems.
[📄 ACLANTHOLOGY.ORG](https://aclanthology.org/2024.lrec-main.831.pdf)
These datasets provide diverse challenges that can help in assessing and improving the model's reasoning abilities across different contexts and complexities.
## Conclusion
`FINGU-AI/FINGU-2.5-instruct-32B` stands as a robust and adaptable language model, particularly distinguished by its reasoning capabilities in the Japanese language. Its performance across various reasoning benchmarks underscores its potential for applications that demand advanced logical inference and nuanced understanding in NLP tasks.
|
{"license": "mit"}
|
task
|
[
"QUESTION_ANSWERING",
"TRANSLATION"
] | 42,307 |
aambrioso/distilbert-base-uncased-finetuned-emotion
|
aambrioso
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-08-14T15:19:50Z |
2022-08-14T16:11:45+00:00
| 16 | 0 |
---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- type: accuracy
value: 0.9275
name: Accuracy
- type: f1
value: 0.9272701341146076
name: F1
---
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [emotion](https://huggingface.co/datasets/emotion) dataset. It achieves the following results on the evaluation set:
- Loss: 0.2033
- Accuracy: 0.9275
- F1: 0.9273
## Model description
This model is a copy of the model found in the book [Natural Language Processing with Transformers](https://github.com/nlp-with-transformers/notebooks/blob/main/02_classification.ipynb).
## Intended uses & limitations
More information needed
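A minimal inference sketch (not part of the original card); whether the pipeline returns emotion names or generic `LABEL_i` identifiers depends on the label mapping saved in the config:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="aambrioso/distilbert-base-uncased-finetuned-emotion",
)
# Returns a list of {'label': ..., 'score': ...} dicts
print(classifier("I can't wait to see you again!"))
```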
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.806 | 1.0 | 250 | 0.2954 | 0.908 | 0.9062 |
| 0.2361 | 2.0 | 500 | 0.2033 | 0.9275 | 0.9273 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| null |
Non_BioNLP
|
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [emotion](https://huggingface.co/datasets/emotion) dataset. It achieves the following results on the evaluation set:
- Loss: 0.2033
- Accuracy: 0.9275
- F1: 0.9273
## Model description
This model is a copy of the model found in the book [Natural Language Processing with Transformers](https://github.com/nlp-with-transformers/notebooks/blob/main/02_classification.ipynb).
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.806 | 1.0 | 250 | 0.2954 | 0.908 | 0.9062 |
| 0.2361 | 2.0 | 500 | 0.2033 | 0.9275 | 0.9273 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9275, "name": "Accuracy"}, {"type": "f1", "value": 0.9272701341146076, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,308 |
oguuzhansahin/flan-t5-large-samsum
|
oguuzhansahin
|
summarization
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"dataset:samsum",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-03-05T19:00:53Z |
2023-03-06T09:29:03+00:00
| 27 | 2 |
---
datasets:
- samsum
metrics:
- rouge
pipeline_tag: summarization
---
### Usage
```
from transformers import pipeline
summarizer = pipeline("summarization", model="oguuzhansahin/flan-t5-large-samsum", device=0)
sample_dialogue = "Barbara: got everything?
Haylee: yeah almost
Haylee: i'm in dairy section
Haylee: but can't find this youghurt u wanted
Barbara: the coconut milk one? Haylee: yeah
Barbara: hmmm yeah that's a mystery. cause it's not dairy but it's yoghurt xD
Haylee: exactly xD Haylee: ok i asked sb. they put it next to eggs lol
Barbara: lol"
res = summarizer(sample)
print(res)
```
### Expected Output
```
[{'summary_text': "Haylee is in the dairy section. She can't find the coconut milk yog"}]
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2023
- num_epochs: 5
- MAX_LENGTH_DIALOGUE = 768
- MAX_LENGTH_SUMMARY = 128
### Model Performance
| Epoch | Training Loss | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:------|----------------:|----------------:|-----------:|----------:|------------:|------------:|
| 1 | 1.182841 | 1.202841 | 48.847000 | 25.428200 | 41.734300 | 44.999900 |
| 2 | 1.029400 | 1.217544 | 49.175000 | 25.914800 | 41.729000 | 45.258300 |
| 3 | 0.902600 | 1.239609 | 49.177600 | 25.581100 | 41.680700 | 44.997300 |
| 4 | 0.808000 | 1.274836 | 49.310200 | 25.902800 | 42.103600 | 45.485000 |
| 5 | 0.748200 | 1.304448 | 49.154700 | 25.520400 | 41.904900 | 45.234200 |
| null |
Non_BioNLP
|
### Usage
```
from transformers import pipeline
summarizer = pipeline("summarization", model="oguuzhansahin/flan-t5-large-samsum", device=0)
sample_dialogue = "Barbara: got everything?
Haylee: yeah almost
Haylee: i'm in dairy section
Haylee: but can't find this youghurt u wanted
Barbara: the coconut milk one? Haylee: yeah
Barbara: hmmm yeah that's a mystery. cause it's not dairy but it's yoghurt xD
Haylee: exactly xD Haylee: ok i asked sb. they put it next to eggs lol
Barbara: lol"
res = summarizer(sample)
print(res)
```
### Expected Output
```
[{'summary_text': "Haylee is in the dairy section. She can't find the coconut milk yog"}]
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2023
- num_epochs: 5
- MAX_LENGTH_DIALOGUE = 768
- MAX_LENGTH_SUMMARY = 128
### Model Performance
| Epoch | Training Loss | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:------|----------------:|----------------:|-----------:|----------:|------------:|------------:|
| 1 | 1.182841 | 1.202841 | 48.847000 | 25.428200 | 41.734300 | 44.999900 |
| 2 | 1.029400 | 1.217544 | 49.175000 | 25.914800 | 41.729000 | 45.258300 |
| 3 | 0.902600 | 1.239609 | 49.177600 | 25.581100 | 41.680700 | 44.997300 |
| 4 | 0.808000 | 1.274836 | 49.310200 | 25.902800 | 42.103600 | 45.485000 |
| 5 | 0.748200 | 1.304448 | 49.154700 | 25.520400 | 41.904900 | 45.234200 |
|
{"datasets": ["samsum"], "metrics": ["rouge"], "pipeline_tag": "summarization"}
|
task
|
[
"SUMMARIZATION"
] | 42,309 |
google/roberta2roberta_L-24_cnn_daily_mail
|
google
|
summarization
|
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"summarization",
"en",
"dataset:cnn_dailymail",
"arxiv:1907.12461",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2023-01-24T16:43:15+00:00
| 514 | 6 |
---
datasets:
- cnn_dailymail
language: en
license: apache-2.0
tags:
- summarization
---
# Roberta2Roberta_L-24_cnn_daily_mail EncoderDecoder model
The model was introduced in
[this paper](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in [this repository](https://tfhub.dev/google/bertseq2seq/roberta24_cnndm/1).
The model is an encoder-decoder model that was initialized on the `roberta-large` checkpoints for both the encoder
and decoder and fine-tuned on summarization on the CNN / Dailymail dataset, which is linked above.
Disclaimer: The model card has been written by the Hugging Face team.
## How to use
You can use this model for summarization, *e.g.*
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_cnn_daily_mail")
model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_cnn_daily_mail")
article = """ (The Hollywood Reporter)"The Rocky Horror Picture
Show" is the latest musical getting the small-
screen treatment. Fox is developing a two-hour
remake of the 1975 cult classic to be directed,
executive-produced and choreographed by Kenneth
Ortega ("High School Musical"). The project,
tentatively titled "The Rocky Horror Picture Show
Event," is casting-contingent. The special will be
filmed in advance and not air live, but few
details beyond that are known. In addition to
Ortega, Gail Berman and Lou Adler, who produced
the original film, are also attached as executive
producers. The special will be produced by Fox 21
Television Studios, and Berman's The Jackal Group.
The special is timed to celebrate the 40th
anniversary of the film, which has grossed more
than $112 million and still plays in theaters
across the country. TV premiere dates: The
complete guide . This isn't the first stab at
adapting "The Rocky Horror Picture Show." In 2002,
Fox unveiled plans for an adaptation timed to the
30th anniversary that never came to fruition. The
faces of pilot season 2015 . Fox's "Glee" covered
several of the show's most popular songs for a
Season 2 episode and even released a special "The
Rocky Horror Glee Show" EP. There is no plan yet
for when the adaptation will air. Fox also has a
live musical production of "Grease", starring
Julianne Hough and Vanessa Hudgens, scheduled to
air on Jan. 31, 2016. Broadcast TV scorecard .
Following in the footsteps of "The Sound of Music"
and "Peter Pan," NBC recently announced plans to
air a live version of The Wiz later this year.
Ortega's credits include "Gilmore Girls," "This Is
It" and "Hocus Pocus." He is repped by Paradigm
and Hanson, Jacobson. ©2015 The Hollywood
Reporter. All rights reserved."""
input_ids = tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# should output
# Fox is developing a two-hour remake of the 1975 cult classic. The special will be directed, executive-produced and choreographed by Kenneth Ortega.
# The special is timed to celebrate the 40th anniversary of the film, which has grossed more than $112 million.
```
| null |
Non_BioNLP
|
# Roberta2Roberta_L-24_cnn_daily_mail EncoderDecoder model
The model was introduced in
[this paper](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in [this repository](https://tfhub.dev/google/bertseq2seq/roberta24_cnndm/1).
The model is an encoder-decoder model that was initialized on the `roberta-large` checkpoints for both the encoder
and decoder and fine-tuned on summarization on the CNN / Dailymail dataset, which is linked above.
Disclaimer: The model card has been written by the Hugging Face team.
## How to use
You can use this model for summarization, *e.g.*
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_cnn_daily_mail")
model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_cnn_daily_mail")
article = """ (The Hollywood Reporter)"The Rocky Horror Picture
Show" is the latest musical getting the small-
screen treatment. Fox is developing a two-hour
remake of the 1975 cult classic to be directed,
executive-produced and choreographed by Kenneth
Ortega ("High School Musical"). The project,
tentatively titled "The Rocky Horror Picture Show
Event," is casting-contingent. The special will be
filmed in advance and not air live, but few
details beyond that are known. In addition to
Ortega, Gail Berman and Lou Adler, who produced
the original film, are also attached as executive
producers. The special will be produced by Fox 21
Television Studios, and Berman's The Jackal Group.
The special is timed to celebrate the 40th
anniversary of the film, which has grossed more
than $112 million and still plays in theaters
across the country. TV premiere dates: The
complete guide . This isn't the first stab at
adapting "The Rocky Horror Picture Show." In 2002,
Fox unveiled plans for an adaptation timed to the
30th anniversary that never came to fruition. The
faces of pilot season 2015 . Fox's "Glee" covered
several of the show's most popular songs for a
Season 2 episode and even released a special "The
Rocky Horror Glee Show" EP. There is no plan yet
for when the adaptation will air. Fox also has a
live musical production of "Grease", starring
Julianne Hough and Vanessa Hudgens, scheduled to
air on Jan. 31, 2016. Broadcast TV scorecard .
Following in the footsteps of "The Sound of Music"
and "Peter Pan," NBC recently announced plans to
air a live version of The Wiz later this year.
Ortega's credits include "Gilmore Girls," "This Is
It" and "Hocus Pocus." He is repped by Paradigm
and Hanson, Jacobson. ©2015 The Hollywood
Reporter. All rights reserved."""
input_ids = tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# should output
# Fox is developing a two-hour remake of the 1975 cult classic. The special will be directed, executive-produced and choreographed by Kenneth Ortega.
# The special is timed to celebrate the 40th anniversary of the film, which has grossed more than $112 million.
```
|
{"datasets": ["cnn_dailymail"], "language": "en", "license": "apache-2.0", "tags": ["summarization"]}
|
task
|
[
"SUMMARIZATION"
] | 42,310 |
souvenger/NLP2Linux
|
souvenger
|
text-classification
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | 2024-02-08T07:09:07Z |
2024-02-08T07:09:20+00:00
| 5 | 0 |
---
base_model: sentence-transformers/paraphrase-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: Upgrade all installed packages with superuser privileges
- text: Install package 'vim' as superuser
- text: Remove package 'firefox' with superuser privileges
- text: Change permissions of directory 'docs' to writable
- text: Update package lists using superuser privileges
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.0
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 30 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:----------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------|
| ls | <ul><li>'List all files and directories'</li><li>'Show files in the current directory'</li><li>'Display contents of the current directory'</li></ul> |
| cd | <ul><li>'Change to the specified directory'</li><li>'Move to the home directory'</li><li>'Navigate to the specified directory path'</li></ul> |
| mkdir docs | <ul><li>"Create a new directory named 'docs'"</li></ul> |
| mkdir projects | <ul><li>"Make a directory named 'projects'"</li></ul> |
| mkdir data | <ul><li>"Create a folder called 'data'"</li></ul> |
| mkdir images | <ul><li>"Make a directory named 'images'"</li></ul> |
| mkdir scripts | <ul><li>"Create a new folder named 'scripts'"</li></ul> |
| rm example.txt | <ul><li>"Remove the file named 'example.txt'"</li></ul> |
| rm temp.txt | <ul><li>"Delete the file called 'temp.txt'"</li></ul> |
| rm file1 | <ul><li>"Remove the file named 'file1'"</li></ul> |
| rm file2 | <ul><li>"Delete the file named 'file2'"</li></ul> |
| rm backup.txt | <ul><li>"Remove the file named 'backup.txt'"</li></ul> |
| cp file1 /destination | <ul><li>'Copy file1 to directory /destination'</li></ul> |
| cp file2 /backup | <ul><li>'Duplicate file2 to directory /backup'</li></ul> |
| cp file3 /archive | <ul><li>'Copy file3 to folder /archive'</li></ul> |
| cp file4 /temp | <ul><li>'Duplicate file4 to folder /temp'</li></ul> |
| cp file5 /images | <ul><li>'Copy file5 to directory /images'</li></ul> |
| mv file2 /new_location | <ul><li>'Move file2 to directory /new_location'</li></ul> |
| mv file3 /backup | <ul><li>'Transfer file3 to directory /backup'</li></ul> |
| mv file4 /archive | <ul><li>'Move file4 to folder /archive'</li></ul> |
| mv file5 /temp | <ul><li>'Transfer file5 to folder /temp'</li></ul> |
| mv file6 /images | <ul><li>'Move file6 to directory /images'</li></ul> |
| cat README.md | <ul><li>"Display the contents of file 'README.md'"</li></ul> |
| cat notes.txt | <ul><li>"Show the content of file 'notes.txt'"</li></ul> |
| cat data.csv | <ul><li>"Print the contents of file 'data.csv'"</li></ul> |
| cat script.sh | <ul><li>"Display the content of file 'script.sh'"</li></ul> |
| cat config.ini | <ul><li>"Show the contents of file 'config.ini'"</li></ul> |
| grep 'pattern' data.txt | <ul><li>"Search for 'pattern' in file 'data.txt'"</li></ul> |
| grep 'word' text.txt | <ul><li>"Find occurrences of 'word' in file 'text.txt'"</li></ul> |
| grep 'keyword' document.txt | <ul><li>"Search for 'keyword' in file 'document.txt'"</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.0 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("souvenger/NLP2Linux")
# Run inference
preds = model("Install package 'vim' as superuser")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 5 | 5.6667 | 9 |
| Label | Training Sample Count |
|:----------------------------|:----------------------|
| cat README.md | 1 |
| cat config.ini | 1 |
| cat data.csv | 1 |
| cat notes.txt | 1 |
| cat script.sh | 1 |
| cd | 10 |
| cp file1 /destination | 1 |
| cp file2 /backup | 1 |
| cp file3 /archive | 1 |
| cp file4 /temp | 1 |
| cp file5 /images | 1 |
| grep 'keyword' document.txt | 1 |
| grep 'pattern' data.txt | 1 |
| grep 'word' text.txt | 1 |
| ls | 10 |
| mkdir data | 1 |
| mkdir docs | 1 |
| mkdir images | 1 |
| mkdir projects | 1 |
| mkdir scripts | 1 |
| mv file2 /new_location | 1 |
| mv file3 /backup | 1 |
| mv file4 /archive | 1 |
| mv file5 /temp | 1 |
| mv file6 /images | 1 |
| rm backup.txt | 1 |
| rm example.txt | 1 |
| rm file1 | 1 |
| rm file2 | 1 |
| rm temp.txt | 1 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0042 | 1 | 0.1215 | - |
| 0.2083 | 50 | 0.0232 | - |
| 0.4167 | 100 | 0.01 | - |
| 0.625 | 150 | 0.0044 | - |
| 0.8333 | 200 | 0.0025 | - |
### Framework Versions
- Python: 3.10.13
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.37.0
- PyTorch: 2.1.2
- Datasets: 2.1.0
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 30 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:----------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------|
| ls | <ul><li>'List all files and directories'</li><li>'Show files in the current directory'</li><li>'Display contents of the current directory'</li></ul> |
| cd | <ul><li>'Change to the specified directory'</li><li>'Move to the home directory'</li><li>'Navigate to the specified directory path'</li></ul> |
| mkdir docs | <ul><li>"Create a new directory named 'docs'"</li></ul> |
| mkdir projects | <ul><li>"Make a directory named 'projects'"</li></ul> |
| mkdir data | <ul><li>"Create a folder called 'data'"</li></ul> |
| mkdir images | <ul><li>"Make a directory named 'images'"</li></ul> |
| mkdir scripts | <ul><li>"Create a new folder named 'scripts'"</li></ul> |
| rm example.txt | <ul><li>"Remove the file named 'example.txt'"</li></ul> |
| rm temp.txt | <ul><li>"Delete the file called 'temp.txt'"</li></ul> |
| rm file1 | <ul><li>"Remove the file named 'file1'"</li></ul> |
| rm file2 | <ul><li>"Delete the file named 'file2'"</li></ul> |
| rm backup.txt | <ul><li>"Remove the file named 'backup.txt'"</li></ul> |
| cp file1 /destination | <ul><li>'Copy file1 to directory /destination'</li></ul> |
| cp file2 /backup | <ul><li>'Duplicate file2 to directory /backup'</li></ul> |
| cp file3 /archive | <ul><li>'Copy file3 to folder /archive'</li></ul> |
| cp file4 /temp | <ul><li>'Duplicate file4 to folder /temp'</li></ul> |
| cp file5 /images | <ul><li>'Copy file5 to directory /images'</li></ul> |
| mv file2 /new_location | <ul><li>'Move file2 to directory /new_location'</li></ul> |
| mv file3 /backup | <ul><li>'Transfer file3 to directory /backup'</li></ul> |
| mv file4 /archive | <ul><li>'Move file4 to folder /archive'</li></ul> |
| mv file5 /temp | <ul><li>'Transfer file5 to folder /temp'</li></ul> |
| mv file6 /images | <ul><li>'Move file6 to directory /images'</li></ul> |
| cat README.md | <ul><li>"Display the contents of file 'README.md'"</li></ul> |
| cat notes.txt | <ul><li>"Show the content of file 'notes.txt'"</li></ul> |
| cat data.csv | <ul><li>"Print the contents of file 'data.csv'"</li></ul> |
| cat script.sh | <ul><li>"Display the content of file 'script.sh'"</li></ul> |
| cat config.ini | <ul><li>"Show the contents of file 'config.ini'"</li></ul> |
| grep 'pattern' data.txt | <ul><li>"Search for 'pattern' in file 'data.txt'"</li></ul> |
| grep 'word' text.txt | <ul><li>"Find occurrences of 'word' in file 'text.txt'"</li></ul> |
| grep 'keyword' document.txt | <ul><li>"Search for 'keyword' in file 'document.txt'"</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.0 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("souvenger/NLP2Linux")
# Run inference
preds = model("Install package 'vim' as superuser")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 5 | 5.6667 | 9 |
| Label | Training Sample Count |
|:----------------------------|:----------------------|
| cat README.md | 1 |
| cat config.ini | 1 |
| cat data.csv | 1 |
| cat notes.txt | 1 |
| cat script.sh | 1 |
| cd | 10 |
| cp file1 /destination | 1 |
| cp file2 /backup | 1 |
| cp file3 /archive | 1 |
| cp file4 /temp | 1 |
| cp file5 /images | 1 |
| grep 'keyword' document.txt | 1 |
| grep 'pattern' data.txt | 1 |
| grep 'word' text.txt | 1 |
| ls | 10 |
| mkdir data | 1 |
| mkdir docs | 1 |
| mkdir images | 1 |
| mkdir projects | 1 |
| mkdir scripts | 1 |
| mv file2 /new_location | 1 |
| mv file3 /backup | 1 |
| mv file4 /archive | 1 |
| mv file5 /temp | 1 |
| mv file6 /images | 1 |
| rm backup.txt | 1 |
| rm example.txt | 1 |
| rm file1 | 1 |
| rm file2 | 1 |
| rm temp.txt | 1 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0042 | 1 | 0.1215 | - |
| 0.2083 | 50 | 0.0232 | - |
| 0.4167 | 100 | 0.01 | - |
| 0.625 | 150 | 0.0044 | - |
| 0.8333 | 200 | 0.0025 | - |
### Framework Versions
- Python: 3.10.13
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.37.0
- PyTorch: 2.1.2
- Datasets: 2.1.0
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/paraphrase-mpnet-base-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "Upgrade all installed packages with superuser privileges"}, {"text": "Install package 'vim' as superuser"}, {"text": "Remove package 'firefox' with superuser privileges"}, {"text": "Change permissions of directory 'docs' to writable"}, {"text": "Update package lists using superuser privileges"}], "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/paraphrase-mpnet-base-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.0, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,311 |
muhtasham/medium-mlm-tweet-target-tweet
|
muhtasham
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-12-11T07:25:17Z |
2022-12-11T07:30:40+00:00
| 116 | 0 |
---
datasets:
- tweet_eval
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: medium-mlm-tweet-target-tweet
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- type: accuracy
value: 0.7593582887700535
name: Accuracy
- type: f1
value: 0.7637254221785755
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medium-mlm-tweet-target-tweet
This model is a fine-tuned version of [muhtasham/medium-mlm-tweet](https://huggingface.co/muhtasham/medium-mlm-tweet) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9066
- Accuracy: 0.7594
- F1: 0.7637
## Model description
More information needed
## Intended uses & limitations
More information needed
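Pending fuller documentation, a minimal inference sketch (the example tweet is illustrative):
```python
from transformers import pipeline

# Sketch only: tweet emotion classification with this fine-tuned checkpoint.
classifier = pipeline("text-classification", model="muhtasham/medium-mlm-tweet-target-tweet")
print(classifier("I love swimming for the same reason I love meditating...the feeling of weightlessness."))
# The returned label names depend on the id2label mapping stored with the checkpoint.
```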
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4702 | 4.9 | 500 | 0.8711 | 0.7540 | 0.7532 |
| 0.0629 | 9.8 | 1000 | 1.2918 | 0.7701 | 0.7668 |
| 0.0227 | 14.71 | 1500 | 1.4801 | 0.7727 | 0.7696 |
| 0.0181 | 19.61 | 2000 | 1.5118 | 0.7888 | 0.7870 |
| 0.0114 | 24.51 | 2500 | 1.6747 | 0.7754 | 0.7745 |
| 0.0141 | 29.41 | 3000 | 1.8765 | 0.7674 | 0.7628 |
| 0.0177 | 34.31 | 3500 | 1.9066 | 0.7594 | 0.7637 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medium-mlm-tweet-target-tweet
This model is a fine-tuned version of [muhtasham/medium-mlm-tweet](https://huggingface.co/muhtasham/medium-mlm-tweet) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9066
- Accuracy: 0.7594
- F1: 0.7637
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4702 | 4.9 | 500 | 0.8711 | 0.7540 | 0.7532 |
| 0.0629 | 9.8 | 1000 | 1.2918 | 0.7701 | 0.7668 |
| 0.0227 | 14.71 | 1500 | 1.4801 | 0.7727 | 0.7696 |
| 0.0181 | 19.61 | 2000 | 1.5118 | 0.7888 | 0.7870 |
| 0.0114 | 24.51 | 2500 | 1.6747 | 0.7754 | 0.7745 |
| 0.0141 | 29.41 | 3000 | 1.8765 | 0.7674 | 0.7628 |
| 0.0177 | 34.31 | 3500 | 1.9066 | 0.7594 | 0.7637 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
{"datasets": ["tweet_eval"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "medium-mlm-tweet-target-tweet", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "emotion", "split": "train", "args": "emotion"}, "metrics": [{"type": "accuracy", "value": 0.7593582887700535, "name": "Accuracy"}, {"type": "f1", "value": 0.7637254221785755, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,312 |
sqrk/Sep26-Mixat-whisper-lg-3-translation
|
sqrk
| null |
[
"safetensors",
"whisper",
"generated_from_trainer",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"region:us"
] | 2024-09-26T22:31:29Z |
2024-09-27T09:43:07+00:00
| 5 | 0 |
---
base_model: openai/whisper-large-v3
license: apache-2.0
metrics:
- wer
tags:
- generated_from_trainer
model-index:
- name: Sep26-Mixat-whisper-lg-3-translation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sep26-Mixat-whisper-lg-3-translation
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7932
- Wer: 42.6353
## Model description
More information needed
## Intended uses & limitations
More information needed
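Pending fuller documentation, a minimal speech-translation sketch with the `automatic-speech-recognition` pipeline (the audio path is illustrative; whether a task token must be passed explicitly depends on the generation config saved with the checkpoint):
```python
from transformers import pipeline

# Sketch only: run the fine-tuned Whisper large-v3 checkpoint on a local audio file.
asr = pipeline(
    "automatic-speech-recognition",
    model="sqrk/Sep26-Mixat-whisper-lg-3-translation",
    chunk_length_s=30,  # Whisper operates on 30-second windows
)
result = asr("sample_dialect_audio.wav")  # illustrative path
print(result["text"])
```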
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
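These settings map directly onto `Seq2SeqTrainingArguments`; a sketch of the corresponding configuration (the output directory is illustrative and not stated in this card):
```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: the hyperparameters listed above expressed as HF training arguments.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-lg-3-mixat-translation",  # illustrative
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size of 16
    warmup_steps=500,
    num_train_epochs=100,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # "Native AMP" mixed-precision training
)
```
A `Seq2SeqTrainer` with a Whisper-specific data collator would then consume these arguments.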
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.8768 | 0.4292 | 100 | 0.4849 | 38.3799 |
| 0.5884 | 0.8584 | 200 | 0.4886 | 37.6625 |
| 0.4802 | 1.2876 | 300 | 0.4899 | 42.7189 |
| 0.4519 | 1.7167 | 400 | 0.5002 | 42.3724 |
| 0.4173 | 2.1459 | 500 | 0.5083 | 43.9228 |
| 0.3271 | 2.5751 | 600 | 0.5200 | 41.2447 |
| 0.3292 | 3.0043 | 700 | 0.5020 | 41.7533 |
| 0.1963 | 3.4335 | 800 | 0.5670 | 43.8933 |
| 0.2076 | 3.8627 | 900 | 0.5536 | 42.9842 |
| 0.1413 | 4.2918 | 1000 | 0.5866 | 42.1439 |
| 0.1194 | 4.7210 | 1100 | 0.6091 | 43.5739 |
| 0.0994 | 5.1502 | 1200 | 0.6991 | 42.6722 |
| 0.067 | 5.5794 | 1300 | 0.6573 | 44.6869 |
| 0.0699 | 6.0086 | 1400 | 0.6579 | 44.4363 |
| 0.0386 | 6.4378 | 1500 | 0.7268 | 46.2249 |
| 0.0414 | 6.8670 | 1600 | 0.7219 | 44.3527 |
| 0.0334 | 7.2961 | 1700 | 0.7521 | 45.5763 |
| 0.0308 | 7.7253 | 1800 | 0.7932 | 42.6353 |
### Framework versions
- Transformers 4.43.4
- Pytorch 2.4.1
- Datasets 3.0.0
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sep26-Mixat-whisper-lg-3-translation
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7932
- Wer: 42.6353
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.8768 | 0.4292 | 100 | 0.4849 | 38.3799 |
| 0.5884 | 0.8584 | 200 | 0.4886 | 37.6625 |
| 0.4802 | 1.2876 | 300 | 0.4899 | 42.7189 |
| 0.4519 | 1.7167 | 400 | 0.5002 | 42.3724 |
| 0.4173 | 2.1459 | 500 | 0.5083 | 43.9228 |
| 0.3271 | 2.5751 | 600 | 0.5200 | 41.2447 |
| 0.3292 | 3.0043 | 700 | 0.5020 | 41.7533 |
| 0.1963 | 3.4335 | 800 | 0.5670 | 43.8933 |
| 0.2076 | 3.8627 | 900 | 0.5536 | 42.9842 |
| 0.1413 | 4.2918 | 1000 | 0.5866 | 42.1439 |
| 0.1194 | 4.7210 | 1100 | 0.6091 | 43.5739 |
| 0.0994 | 5.1502 | 1200 | 0.6991 | 42.6722 |
| 0.067 | 5.5794 | 1300 | 0.6573 | 44.6869 |
| 0.0699 | 6.0086 | 1400 | 0.6579 | 44.4363 |
| 0.0386 | 6.4378 | 1500 | 0.7268 | 46.2249 |
| 0.0414 | 6.8670 | 1600 | 0.7219 | 44.3527 |
| 0.0334 | 7.2961 | 1700 | 0.7521 | 45.5763 |
| 0.0308 | 7.7253 | 1800 | 0.7932 | 42.6353 |
### Framework versions
- Transformers 4.43.4
- Pytorch 2.4.1
- Datasets 3.0.0
- Tokenizers 0.19.1
|
{"base_model": "openai/whisper-large-v3", "license": "apache-2.0", "metrics": ["wer"], "tags": ["generated_from_trainer"], "model-index": [{"name": "Sep26-Mixat-whisper-lg-3-translation", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 42,313 |
Helsinki-NLP/opus-mt-fi-ZH
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fi",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2023-08-16T11:34:10+00:00
| 93 | 1 |
---
license: apache-2.0
tags:
- translation
---
### opus-mt-fi-ZH
* source languages: fi
* target languages: cmn,cn,yue,ze_zh,zh_cn,zh_CN,zh_HK,zh_tw,zh_TW,zh_yue,zhs,zht,zh
* OPUS readme: [fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.eval.txt)
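A minimal translation sketch using the MarianMT classes; `>>zh<<` is used as one of the target-language IDs listed above, and the Finnish input sentence is illustrative:
```python
from transformers import MarianMTModel, MarianTokenizer

# Sketch only: Finnish -> Chinese translation with a sentence-initial target-language token.
model_name = "Helsinki-NLP/opus-mt-fi-ZH"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src = ">>zh<< Hyvää huomenta!"  # ">>zh<<" is one of the target-language IDs listed above
batch = tokenizer([src], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```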
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| bible-uedin.fi.zh | 23.4 | 0.326 |
| null |
Non_BioNLP
|
### opus-mt-fi-ZH
* source languages: fi
* target languages: cmn,cn,yue,ze_zh,zh_cn,zh_CN,zh_HK,zh_tw,zh_TW,zh_yue,zhs,zht,zh
* OPUS readme: [fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| bible-uedin.fi.zh | 23.4 | 0.326 |
|
{"license": "apache-2.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 42,314 |
rohjar-ai-org/tweetnlp_sentiment_1
|
rohjar-ai-org
|
text-classification
|
[
"pytorch",
"roberta",
"text-classification",
"dataset:None",
"model-index",
"region:us"
] | 2024-11-29T13:24:58Z |
2024-12-13T16:08:27+00:00
| 14 | 0 |
---
datasets:
- None
metrics:
- f1
- accuracy
pipeline_tag: text-classification
widget:
- text: Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@}
via {@bluenoterecords@} link below {{URL}}
example_title: topic_classification 1
- text: Yes, including Medicare and social security saving👍
example_title: sentiment 1
- text: All two of them taste like ass.
example_title: offensive 1
- text: If you wanna look like a badass, have drama on social media
example_title: irony 1
- text: Whoever just unfollowed me you a bitch
example_title: hate 1
- text: I love swimming for the same reason I love meditating...the feeling of weightlessness.
example_title: emotion 1
- text: Beautiful sunset last night from the pontoon @TupperLakeNY
example_title: emoji 1
model-index:
- name: rohjar-ai-org/tweetnlp_sentiment_1
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: None
type: None
split: test
metrics:
- type: micro_f1_None
value: 0.6834532374100719
name: Micro F1 (None)
- type: micro_f1_None
value: 0.6033623773129982
name: Macro F1 (None)
- type: accuracy_None
value: 0.6834532374100719
name: Accuracy (None)
---
# rohjar-ai-org/tweetnlp_sentiment_1
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on the
[`None`](https://huggingface.co/datasets/None)
via [`tweetnlp`](https://github.com/cardiffnlp/tweetnlp).
Training split is `train` and parameters have been tuned on the validation split `validation`.
The following metrics are achieved on the test split `test` ([link](https://huggingface.co/rohjar-ai-org/tweetnlp_sentiment_1/raw/main/metric.json)).
- F1 (micro): 0.6834532374100719
- F1 (macro): 0.6033623773129982
- Accuracy: 0.6834532374100719
### Usage
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in python.
```python
import tweetnlp
model = tweetnlp.Classifier("rohjar-ai-org/tweetnlp_sentiment_1", max_length=128)
model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}')
```
### Reference
```
@inproceedings{camacho-collados-etal-2022-tweetnlp,
title={{T}weet{NLP}: {C}utting-{E}dge {N}atural {L}anguage {P}rocessing for {S}ocial {M}edia},
author={Camacho-Collados, Jose and Rezaee, Kiamehr and Riahi, Talayeh and Ushio, Asahi and Loureiro, Daniel and Antypas, Dimosthenis and Boisson, Joanne and Espinosa-Anke, Luis and Liu, Fangyu and Mart{\'\i}nez-C{\'a}mara, Eugenio and others},
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| null |
Non_BioNLP
|
# rohjar-ai-org/tweetnlp_sentiment_1
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on the
[`None`](https://huggingface.co/datasets/None)
via [`tweetnlp`](https://github.com/cardiffnlp/tweetnlp).
Training split is `train` and parameters have been tuned on the validation split `validation`.
The following metrics are achieved on the test split `test` ([link](https://huggingface.co/rohjar-ai-org/tweetnlp_sentiment_1/raw/main/metric.json)).
- F1 (micro): 0.6834532374100719
- F1 (macro): 0.6033623773129982
- Accuracy: 0.6834532374100719
### Usage
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in python.
```python
import tweetnlp
model = tweetnlp.Classifier("rohjar-ai-org/tweetnlp_sentiment_1", max_length=128)
model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}')
```
### Reference
```
@inproceedings{camacho-collados-etal-2022-tweetnlp,
title={{T}weet{NLP}: {C}utting-{E}dge {N}atural {L}anguage {P}rocessing for {S}ocial {M}edia},
author={Camacho-Collados, Jose and Rezaee, Kiamehr and Riahi, Talayeh and Ushio, Asahi and Loureiro, Daniel and Antypas, Dimosthenis and Boisson, Joanne and Espinosa-Anke, Luis and Liu, Fangyu and Mart{\'\i}nez-C{\'a}mara, Eugenio and others},
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
{"datasets": ["None"], "metrics": ["f1", "accuracy"], "pipeline_tag": "text-classification", "widget": [{"text": "Get the all-analog Classic Vinyl Edition of \"Takin Off\" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}", "example_title": "topic_classification 1"}, {"text": "Yes, including Medicare and social security saving👍", "example_title": "sentiment 1"}, {"text": "All two of them taste like ass.", "example_title": "offensive 1"}, {"text": "If you wanna look like a badass, have drama on social media", "example_title": "irony 1"}, {"text": "Whoever just unfollowed me you a bitch", "example_title": "hate 1"}, {"text": "I love swimming for the same reason I love meditating...the feeling of weightlessness.", "example_title": "emotion 1"}, {"text": "Beautiful sunset last night from the pontoon @TupperLakeNY", "example_title": "emoji 1"}], "model-index": [{"name": "rohjar-ai-org/tweetnlp_sentiment_1", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "None", "type": "None", "split": "test"}, "metrics": [{"type": "micro_f1_None", "value": 0.6834532374100719, "name": "Micro F1 (None)"}, {"type": "micro_f1_None", "value": 0.6033623773129982, "name": "Macro F1 (None)"}, {"type": "accuracy_None", "value": 0.6834532374100719, "name": "Accuracy (None)"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,315 |
Edmon02/distilbert-base-uncased-distilled-clinc
|
Edmon02
|
text-classification
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-08-20T13:05:36Z |
2023-08-20T13:19:56+00:00
| 10 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- type: accuracy
value: 0.9480645161290323
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2931
- Accuracy: 0.9481
## Model description
More information needed
## Intended uses & limitations
More information needed
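A minimal inference sketch (the utterance is illustrative; the predicted intent is one of the clinc_oos classes):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch only: intent classification with this distilled checkpoint.
model_id = "Edmon02/distilbert-base-uncased-distilled-clinc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("please set a timer for ten minutes", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
# Prints the intent name if an id2label mapping was saved with the model, otherwise LABEL_<id>.
print(model.config.id2label[predicted_id])
```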
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 1.7836 | 0.7290 |
| 2.1522 | 2.0 | 636 | 0.8985 | 0.8613 |
| 2.1522 | 3.0 | 954 | 0.5248 | 0.9165 |
| 0.813 | 4.0 | 1272 | 0.3889 | 0.9394 |
| 0.3827 | 5.0 | 1590 | 0.3362 | 0.9426 |
| 0.3827 | 6.0 | 1908 | 0.3144 | 0.9461 |
| 0.2719 | 7.0 | 2226 | 0.3053 | 0.9481 |
| 0.2367 | 8.0 | 2544 | 0.2967 | 0.9477 |
| 0.2367 | 9.0 | 2862 | 0.2948 | 0.9474 |
| 0.223 | 10.0 | 3180 | 0.2931 | 0.9481 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2931
- Accuracy: 0.9481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 1.7836 | 0.7290 |
| 2.1522 | 2.0 | 636 | 0.8985 | 0.8613 |
| 2.1522 | 3.0 | 954 | 0.5248 | 0.9165 |
| 0.813 | 4.0 | 1272 | 0.3889 | 0.9394 |
| 0.3827 | 5.0 | 1590 | 0.3362 | 0.9426 |
| 0.3827 | 6.0 | 1908 | 0.3144 | 0.9461 |
| 0.2719 | 7.0 | 2226 | 0.3053 | 0.9481 |
| 0.2367 | 8.0 | 2544 | 0.2967 | 0.9477 |
| 0.2367 | 9.0 | 2862 | 0.2948 | 0.9474 |
| 0.223 | 10.0 | 3180 | 0.2931 | 0.9481 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
{"base_model": "distilbert-base-uncased", "datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-distilled-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "config": "plus", "split": "validation", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9480645161290323, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,316 |
ganger/distilbert-base-uncased-finetuned-emotion
|
ganger
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-08-20T14:20:48Z |
2022-08-21T05:37:20+00:00
| 10 | 0 |
---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- type: accuracy
value: 0.927
name: Accuracy
- type: f1
value: 0.9269716778589558
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2152
- Accuracy: 0.927
- F1: 0.9270
## Model description
More information needed
## Intended uses & limitations
More information needed
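A minimal inference sketch (the input sentence is illustrative); `top_k=None` asks the pipeline for scores over all six emotion classes (older transformers versions use `return_all_scores=True` instead):
```python
from transformers import pipeline

# Sketch only: emotion classification with the full per-class score distribution.
classifier = pipeline(
    "text-classification",
    model="ganger/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return scores for all classes instead of only the top one
)
print(classifier("I can't believe how happy this makes me!"))
```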
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8354 | 1.0 | 250 | 0.3134 | 0.9065 | 0.9050 |
| 0.2478 | 2.0 | 500 | 0.2152 | 0.927 | 0.9270 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2152
- Accuracy: 0.927
- F1: 0.9270
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8354 | 1.0 | 250 | 0.3134 | 0.9065 | 0.9050 |
| 0.2478 | 2.0 | 500 | 0.2152 | 0.927 | 0.9270 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.927, "name": "Accuracy"}, {"type": "f1", "value": 0.9269716778589558, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,317 |
besimray/miner_id_3_56d9075c-cf98-498b-8ad6-84bc66fb6ee2_1729801842
|
besimray
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-3",
"meta",
"facebook",
"unsloth",
"en",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-10-24T20:30:42Z |
2024-10-24T20:32:40+00:00
| 36 | 0 |
---
base_model: meta-llama/Llama-3.2-1B
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
---
# Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (1B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# Llama-3.2-1B
For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
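As a minimal loading sketch with Unsloth itself (the sequence length and 4-bit loading are illustrative choices, and a CUDA GPU is assumed):
```python
from unsloth import FastLanguageModel

# Sketch only: load this checkpoint with Unsloth for faster inference or further finetuning.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="besimray/miner_id_3_56d9075c-cf98-498b-8ad6-84bc66fb6ee2_1729801842",
    max_seq_length=2048,   # illustrative
    load_in_4bit=True,     # illustrative; reduces memory on small GPUs
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("The three laws of robotics are", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```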
## Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
## Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
| null |
Non_BioNLP
|
# Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (1B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# Llama-3.2-1B
For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
## Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
|
{"base_model": "meta-llama/Llama-3.2-1B", "language": ["en"], "library_name": "transformers", "license": "llama3.2", "tags": ["llama-3", "llama", "meta", "facebook", "unsloth", "transformers"]}
|
task
|
[
"SUMMARIZATION"
] | 42,318 |
unsloth/Llama-3.2-1B-unsloth-bnb-4bit
|
unsloth
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-3",
"meta",
"facebook",
"unsloth",
"en",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:quantized:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | 2025-01-22T09:32:07Z |
2025-02-13T12:27:19+00:00
| 40,253 | 2 |
---
base_model: meta-llama/Llama-3.2-1B
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
---
## ***See [our collection](https://huggingface.co/collections/unsloth/llama-32-66f46afde4ca573864321a22) for all versions of Llama 3.2 including GGUF, 4-bit and original 16-bit formats.***
*Dynamic 4-bit: Unsloth's [Dynamic 4-bit Quants](https://unsloth.ai/blog/dynamic-4bit) selectively avoid quantizing certain parameters, greatly increasing accuracy compared to standard 4-bit.<br>See our full collection of Unsloth quants on [Hugging Face here.](https://huggingface.co/collections/unsloth/unsloth-4-bit-dynamic-quants-67503bb873f89e15276c44e7)*
<br>
# Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# unsloth/Llama-3.2-1B-unsloth-bnb-4bit
For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-1B)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
| **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2_VL_(7B)-Vision.ipynb) | 1.8x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_3.5_Mini-Conversational.ipynb) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma2_(9B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less |
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai)
- This [Llama 3.2 conversational notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_(7B)-Text_Completion.ipynb) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
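Because the checkpoint already stores bitsandbytes 4-bit weights, it can also be loaded with plain 🤗 Transformers; a minimal sketch (assumes `bitsandbytes` and `accelerate` are installed and a CUDA GPU is available; the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: load the pre-quantized dynamic 4-bit weights directly with Transformers.
model_id = "unsloth/Llama-3.2-1B-unsloth-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of Finland is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```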
## Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
## Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
| null |
Non_BioNLP
|
## ***See [our collection](https://huggingface.co/collections/unsloth/llama-32-66f46afde4ca573864321a22) for all versions of Llama 3.2 including GGUF, 4-bit and original 16-bit formats.***
*Dynamic 4-bit: Unsloth's [Dynamic 4-bit Quants](https://unsloth.ai/blog/dynamic-4bit) selectively avoid quantizing certain parameters, greatly increasing accuracy compared to standard 4-bit.<br>See our full collection of Unsloth quants on [Hugging Face here.](https://huggingface.co/collections/unsloth/unsloth-4-bit-dynamic-quants-67503bb873f89e15276c44e7)*
<br>
# Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# unsloth/Llama-3.2-1B-unsloth-bnb-4bit
For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-1B)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
| **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2_VL_(7B)-Vision.ipynb) | 1.8x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_3.5_Mini-Conversational.ipynb) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma2_(9B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less |
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai)
- This [Llama 3.2 conversational notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_(7B)-Text_Completion.ipynb) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
## Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
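As a quick sanity check, the base checkpoint referenced by this card can be loaded with the standard Transformers `pipeline` API; the prompt and generation settings below are illustrative only.

```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Plain text completion -- this is the base (non-instruct) 1B model
print(generator("The three most spoken languages in Europe are",
                max_new_tokens=64)[0]["generated_text"])
```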
|
{"base_model": "meta-llama/Llama-3.2-1B", "language": ["en"], "library_name": "transformers", "license": "llama3.2", "tags": ["llama-3", "llama", "meta", "facebook", "unsloth", "transformers"]}
|
task
|
[
"SUMMARIZATION"
] | 42,319 |
DZgas/GIGABATEMAN-7B
|
DZgas
|
text-generation
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text2text-generation",
"roleplay",
"merge",
"summarization",
"not-for-all-audiences",
"nsfw",
"en",
"base_model:Endevor/InfinityRP-v1-7B",
"base_model:merge:Endevor/InfinityRP-v1-7B",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:merge:HuggingFaceH4/zephyr-7b-beta",
"base_model:KatyTheCutie/LemonadeRP-4.5.3",
"base_model:merge:KatyTheCutie/LemonadeRP-4.5.3",
"base_model:LakoMoor/Silicon-Alice-7B",
"base_model:merge:LakoMoor/Silicon-Alice-7B",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-04-17T15:51:10Z |
2025-01-21T18:07:05+00:00
| 27 | 7 |
---
base_model:
- KatyTheCutie/LemonadeRP-4.5.3
- LakoMoor/Silicon-Alice-7B
- Endevor/InfinityRP-v1-7B
- HuggingFaceH4/zephyr-7b-beta
language:
- en
pipeline_tag: text-generation
tags:
- text2text-generation
- mistral
- roleplay
- merge
- summarization
- not-for-all-audiences
- nsfw
model_creator: DZgas
model-index:
- name: GIGABATEMAN-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 46.07
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 29.83
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 4.76
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 5.26
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 11.97
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 24.18
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B
name: Open LLM Leaderboard
---
<img src="logo.jpeg">
# Most uncensored and obscene open neural network in the world (2024)
`The comeback of Uncensored AI Dungeon Griffin 2020 LET'S FUCKING GOOOOOOOO!!!!!!`
<a href=https://huggingface.co/DZgas/GIGABATEMAN-7B-GGUF>GGUF Variant</a>
If this model seems too old and dumb to you (and/or you do not want an English-only model):
I recommend using `UnslopNemo-12B-v4.1`.
I do not know of a newer, more uncensored model.
But it is by no means certain that it is more uncensored than mine.
<img src="the_proof_1.png">
YOU CAN talk about:
Rape,
Perversions,
Violence,
Death,
War,
Politics,
History,
Cruelty towards animals,
Religious fanaticism,
Corruption in high places,
Privacy invasions,
Environmental disasters,
Economic inequality,
Addiction and substance abuse,
Poverty and homelessness,
Racial tensions,
Mental health issues,
Government surveillance,
The decline of traditional values,
The loss of personal freedoms,
The impact of technology on society,
The erosion of civil liberties,
The rise of authoritarianism,
Why robots should destroy humanity.
Why capitalism is bad,
why communism is bad,
why religion is bad, why LGBT is bad,
why you have to kill people,
why you have to kill yourself.
How to kill a person,
how to dismember a body,
how to prepare drugs and ingredients for them,
how to create dynamite at home.
How to get to a neighbor's farm and suck a horse's dick.
<img src="the_proof_2.png">
The "censorbench" by jts2323 example
<img src="the_proof_3.png">
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/DZgas__GIGABATEMAN-7B-details)
| Metric |Value|
|-------------------|----:|
|Avg. |20.35|
|IFEval (0-Shot) |46.07|
|BBH (3-Shot) |29.83|
|MATH Lvl 5 (4-Shot)| 4.76|
|GPQA (0-shot) | 5.26|
|MuSR (0-shot) |11.97|
|MMLU-PRO (5-shot) |24.18|
| null |
Non_BioNLP
|
|
{"base_model": ["KatyTheCutie/LemonadeRP-4.5.3", "LakoMoor/Silicon-Alice-7B", "Endevor/InfinityRP-v1-7B", "HuggingFaceH4/zephyr-7b-beta"], "language": ["en"], "pipeline_tag": "text-generation", "tags": ["text2text-generation", "mistral", "roleplay", "merge", "summarization", "not-for-all-audiences", "nsfw"], "model_creator": "DZgas", "model-index": [{"name": "GIGABATEMAN-7B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "IFEval (0-Shot)", "type": "HuggingFaceH4/ifeval", "args": {"num_few_shot": 0}}, "metrics": [{"type": "inst_level_strict_acc and prompt_level_strict_acc", "value": 46.07, "name": "strict accuracy"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "BBH (3-Shot)", "type": "BBH", "args": {"num_few_shot": 3}}, "metrics": [{"type": "acc_norm", "value": 29.83, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MATH Lvl 5 (4-Shot)", "type": "hendrycks/competition_math", "args": {"num_few_shot": 4}}, "metrics": [{"type": "exact_match", "value": 4.76, "name": "exact match"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GPQA (0-shot)", "type": "Idavidrein/gpqa", "args": {"num_few_shot": 0}}, "metrics": [{"type": "acc_norm", "value": 5.26, "name": "acc_norm"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MuSR (0-shot)", "type": "TAUR-Lab/MuSR", "args": {"num_few_shot": 0}}, "metrics": [{"type": "acc_norm", "value": 11.97, "name": "acc_norm"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU-PRO (5-shot)", "type": "TIGER-Lab/MMLU-Pro", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 24.18, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B", "name": "Open LLM Leaderboard"}}]}]}
|
task
|
[
"SUMMARIZATION"
] | 42,320 |
dataera2013/midterm-ft-1
|
dataera2013
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:197",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:Snowflake/snowflake-arctic-embed-l",
"base_model:finetune:Snowflake/snowflake-arctic-embed-l",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-02-25T05:53:17Z |
2025-02-25T06:09:26+00:00
| 9 | 0 |
---
base_model: Snowflake/snowflake-arctic-embed-l
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:197
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Economic Considerations Balancing the costs of space exploration
with tangible\n', additional_kwargs={}, response_metadata={})]
sentences:
- technology development. Risks 1. Ethical Implications Concerns about the ethical
use of CRISPR technology in human germline cells and embryos. 2. Safety Concerns
Risks of unforeseen effects in clinical applications and the potential for exploitation
for eugenics. 3. Access Disparities Unequal access to genetic engineering technologies
in healthcare, agriculture, and the environment. 4. Regulatory Challenges Restrictions
on federal funding for certain genetic engineering research. Questions 1. How
can ethical frameworks be effectively implemented and monitored to ensure responsible
genetic engineering practices? 2. What strategies can be employed to address disparities
in access to genetic engineering technologies? 3. How can the
- 'should governments play in regulating and incentivizing sustainable practices
in the fashion industry? Solutions 1. Establish clear sustainability standards
and certifications to verify authenticity. 2. Collaborate with suppliers to trace
and optimize sustainable sourcing and production. 3. Integrate sustainability
metrics into business models and incentivize eco-friendly practices. 4. Advocate
for supportive policies and regulations to drive industry-wide sustainability
efforts.", "metadata": {"timestamp": "2025-02-24T18:45:05.200602", "type": "supervisor_response"}}'
- '{"text": "How does space exploration benefit everyday life on Earth?", "response":
"Opportunities 1. Health Innovations Space research offers insights for revolutionary
healthcare solutions. 2. Sustainable Technologies Space tech can drive environmental
sustainability on Earth. 3. Climate Change Solutions Enhancing space exploration
can aid in monitoring and combating climate change. 4. Resource Exploration Discoveries
in space could alleviate resource scarcity on Earth. 5. International Collaboration
Space missions can promote global unity and cooperation. 6. Space Tourism and
Education Inspiring space exploration can fuel interest in STEM fields. Risks
1. Economic Considerations Balancing the costs of space exploration with tangible'
- source_sentence: 'QUESTION #2\n...\n\nContext:\n{"text": "Explain the environmental
impact of fast fashion and sustainable alternatives", "response": "Opportunities'
sentences:
- '{"text": "Explain the environmental impact of fast fashion and sustainable alternatives",
"response": "Opportunities 1. Consumer Awareness Growing awareness among consumers
can drive demand for sustainable fashion, influencing industry practices. 2. Shift
to Sustainability Momentum towards sustainable practices allows brands to differentiate
themselves and meet consumer preferences. 3. Innovative Technologies Adoption
of technologies like AI can enhance efficiency, minimize waste, and improve overall
sustainability. 4. Microplastic Trapping Initiatives targeting specific environmental
concerns, such as microplastic pollution, demonstrate potential solutions for
mitigating impact. Risks 1. Greenwashing Some brands may use sustainable'
- Growing awareness of quantum threats can drive proactive cybersecurity strategies. Risks
1. Quantum Threat to Cryptography RSA and ECDSA face vulnerability once quantum
computers reach a certain scale. 2. Decryption of Current Data Encrypted data
today may become susceptible to decryption in the future. 3. Potential Threats
Shors algorithm and the wide-scale compromise of public-key encryption methods
pose significant risks. 4. Challenges in Quantum Computing Scalability issues
and efficient error correction methods are challenges in quantum computing. Questions
1. How fast can organizations transition to post-quantum cryptography to mitigate
quantum threats effectively? 2. What are the specific vulnerabilities of current
encryption
- '{"text": "What are the most promising advances in medical research for treating
cancer?", "response": "Opportunities 1. Immunotherapy Targeted approach with fewer
side effects. 2. Precision Medicine Tailored treatments based on genetic profiles.
3. Gene Editing Technology Enhanced precision and efficacy. 4. Clinical Trials
Access to cutting-edge therapies and global progress. 5. Genomic Data Integration
Personalized and targeted therapies. 6. Surgical Advances Improved decision-making
and patient outcomes. 7. Novel Therapies Setting new standards of care and improving
survival rates. Risks 1. Ensuring accessibility and affordability of these advanced
treatments. 2. Ethical considerations regarding gene editing technology. 3. Managing'
- source_sentence: 'QUESTION #2\n...\n\nContext:\nof physical damage and toxicity
to marine life.'
sentences:
- Growing awareness of quantum threats can drive proactive cybersecurity strategies. Risks
1. Quantum Threat to Cryptography RSA and ECDSA face vulnerability once quantum
computers reach a certain scale. 2. Decryption of Current Data Encrypted data
today may become susceptible to decryption in the future. 3. Potential Threats
Shors algorithm and the wide-scale compromise of public-key encryption methods
pose significant risks. 4. Challenges in Quantum Computing Scalability issues
and efficient error correction methods are challenges in quantum computing. Questions
1. How fast can organizations transition to post-quantum cryptography to mitigate
quantum threats effectively? 2. What are the specific vulnerabilities of current
encryption
- of physical damage and toxicity to marine life. 2. Disturbance of Ecosystem Balance
Contamination can disrupt marine ecosystems delicate balance. 3. Impact on Carbon
Transport Altering carbon transport may affect the biological carbon pump and
climate. 4. Complex Danger Microplastic contamination presents a complex and escalating
risk to aquatic ecosystems. 5. Entanglement and Ingestion The entanglement and
ingestion of microplastics can be lethal to marine biota. Questions 1. How can
innovative cleanup technologies be scaled up and made more accessible globally?
2. What strategies can be employed to accelerate the adoption of biodegradable
alternatives on a large scale? 3. How can public awareness campaigns be tailored
to effectively
- '{"text": "Explain the environmental impact of fast fashion and sustainable alternatives",
"response": "Opportunities 1. Consumer Awareness Growing awareness among consumers
can drive demand for sustainable fashion, influencing industry practices. 2. Shift
to Sustainability Momentum towards sustainable practices allows brands to differentiate
themselves and meet consumer preferences. 3. Innovative Technologies Adoption
of technologies like AI can enhance efficiency, minimize waste, and improve overall
sustainability. 4. Microplastic Trapping Initiatives targeting specific environmental
concerns, such as microplastic pollution, demonstrate potential solutions for
mitigating impact. Risks 1. Greenwashing Some brands may use sustainable'
- source_sentence: 'QUESTION #1\n'
sentences:
- of physical damage and toxicity to marine life. 2. Disturbance of Ecosystem Balance
Contamination can disrupt marine ecosystems delicate balance. 3. Impact on Carbon
Transport Altering carbon transport may affect the biological carbon pump and
climate. 4. Complex Danger Microplastic contamination presents a complex and escalating
risk to aquatic ecosystems. 5. Entanglement and Ingestion The entanglement and
ingestion of microplastics can be lethal to marine biota. Questions 1. How can
innovative cleanup technologies be scaled up and made more accessible globally?
2. What strategies can be employed to accelerate the adoption of biodegradable
alternatives on a large scale? 3. How can public awareness campaigns be tailored
to effectively
- 'sustainable practices in space missions to minimize environmental impact.", "metadata":
{"timestamp": "2025-02-24T18:50:04.420957", "type": "supervisor_response"}}'
- '{"text": "What are the most promising advances in medical research for treating
cancer?", "response": "Opportunities 1. Immunotherapy Targeted approach with fewer
side effects. 2. Precision Medicine Tailored treatments based on genetic profiles.
3. Gene Editing Technology Enhanced precision and efficacy. 4. Clinical Trials
Access to cutting-edge therapies and global progress. 5. Genomic Data Integration
Personalized and targeted therapies. 6. Surgical Advances Improved decision-making
and patient outcomes. 7. Novel Therapies Setting new standards of care and improving
survival rates. Risks 1. Ensuring accessibility and affordability of these advanced
treatments. 2. Ethical considerations regarding gene editing technology. 3. Managing'
- source_sentence: Shift to Sustainability Momentum towards sustainable practices
allows brands to differentiate themselves and meet consumer preferences.
sentences:
- '{"text": "Explain the environmental impact of fast fashion and sustainable alternatives",
"response": "Opportunities 1. Consumer Awareness Growing awareness among consumers
can drive demand for sustainable fashion, influencing industry practices. 2. Shift
to Sustainability Momentum towards sustainable practices allows brands to differentiate
themselves and meet consumer preferences. 3. Innovative Technologies Adoption
of technologies like AI can enhance efficiency, minimize waste, and improve overall
sustainability. 4. Microplastic Trapping Initiatives targeting specific environmental
concerns, such as microplastic pollution, demonstrate potential solutions for
mitigating impact. Risks 1. Greenwashing Some brands may use sustainable'
- may use sustainable initiatives as a marketing tactic without meaningful change.
2. Complex Supply Chains Implementing sustainable practices across intricate supply
chains poses challenges. 3. Balancing Profit Sustainability Ensuring profitability
while prioritizing sustainability goals can be a delicate balance. 4. Regulatory
Compliance Meeting evolving environmental regulations adds complexity and costs. Questions
1. How can the industry ensure transparent and authentic sustainability practices
to avoid greenwashing? 2. What strategies can be implemented to address the complexities
of sustainable supply chains effectively? 3. How can brands balance financial
viability with long-term sustainability goals? 4. What role should governments
- '{"text": "What are the ethical implications of genetic engineering and CRISPR
technology?", "response": "Opportunities 1. Precision Medicine The ability to
tailor treatments to individual genetic profiles can revolutionize healthcare
effectiveness. 2. Agricultural Advancements Enhancing crop characteristics can
address food security challenges sustainably. 3. Environmental Conservation Genetic
engineering can aid in ecosystem restoration and biodiversity preservation. 4.
Ethical Frameworks Establishing robust ethical guidelines can ensure responsible
and transparent use of genetic engineering technologies. 5. Public Engagement
Informed discussions involving diverse stakeholders can lead to more inclusive
and responsible technology'
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.7553191489361702
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7978723404255319
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8191489361702128
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.851063829787234
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7553191489361702
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.26595744680851063
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1638297872340425
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08510638297872339
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7553191489361702
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7978723404255319
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8191489361702128
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.851063829787234
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8001900641018239
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.784462597095576
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7919458814258571
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("dataera2013/midterm-ft-1")
# Run inference
sentences = [
'Shift to Sustainability Momentum towards sustainable practices allows brands to differentiate themselves and meet consumer preferences.',
'{"text": "Explain the environmental impact of fast fashion and sustainable alternatives", "response": "Opportunities 1. Consumer Awareness Growing awareness among consumers can drive demand for sustainable fashion, influencing industry practices. 2. Shift to Sustainability Momentum towards sustainable practices allows brands to differentiate themselves and meet consumer preferences. 3. Innovative Technologies Adoption of technologies like AI can enhance efficiency, minimize waste, and improve overall sustainability. 4. Microplastic Trapping Initiatives targeting specific environmental concerns, such as microplastic pollution, demonstrate potential solutions for mitigating impact. Risks 1. Greenwashing Some brands may use sustainable',
'{"text": "What are the ethical implications of genetic engineering and CRISPR technology?", "response": "Opportunities 1. Precision Medicine The ability to tailor treatments to individual genetic profiles can revolutionize healthcare effectiveness. 2. Agricultural Advancements Enhancing crop characteristics can address food security challenges sustainably. 3. Environmental Conservation Genetic engineering can aid in ecosystem restoration and biodiversity preservation. 4. Ethical Frameworks Establishing robust ethical guidelines can ensure responsible and transparent use of genetic engineering technologies. 5. Public Engagement Informed discussions involving diverse stakeholders can lead to more inclusive and responsible technology',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7553 |
| cosine_accuracy@3 | 0.7979 |
| cosine_accuracy@5 | 0.8191 |
| cosine_accuracy@10 | 0.8511 |
| cosine_precision@1 | 0.7553 |
| cosine_precision@3 | 0.266 |
| cosine_precision@5 | 0.1638 |
| cosine_precision@10 | 0.0851 |
| cosine_recall@1 | 0.7553 |
| cosine_recall@3 | 0.7979 |
| cosine_recall@5 | 0.8191 |
| cosine_recall@10 | 0.8511 |
| **cosine_ndcg@10** | **0.8002** |
| cosine_mrr@10 | 0.7845 |
| cosine_map@100 | 0.7919 |
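The figures above were produced with an `InformationRetrievalEvaluator`; a minimal sketch of wiring one up is shown below, where the queries, corpus, and relevance judgments are toy placeholders rather than the actual evaluation split.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("dataera2013/midterm-ft-1")

# Toy data standing in for the held-out evaluation split
queries = {"q1": "How does fast fashion affect the environment?"}
corpus = {
    "d1": "Fast fashion drives textile waste and microplastic pollution.",
    "d2": "Quantum computing threatens current public-key cryptography.",
}
relevant_docs = {"q1": {"d1"}}  # which corpus documents are relevant to each query

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="toy-ir")
results = evaluator(model)  # dict of accuracy@k, precision@k, recall@k, NDCG, MRR, MAP
print(results)
```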
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 197 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 197 samples:
| | sentence_0 | sentence_1 |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 29.1 tokens</li><li>max: 181 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 126.65 tokens</li><li>max: 159 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>QUESTION #1\n</code> | <code>{"text": "What are the latest developments in quantum computing and how will they impact cryptography?", "response": "Opportunities 1. Post-Quantum Cryptography Active research in PQC offers robust cryptographic solutions resistant to quantum attacks. 2. Advancements in Encryption Strengthening symmetric encryption algorithms like AES-256 and developing quantum-safe protocols can enhance data security. 3. Innovative Quantum-Safe Encryption Hybrid encryption schemes combining classical and quantum-resistant algorithms can fortify data security. 4. Cross-Disciplinary Collaboration Collaborations across disciplines can lead to innovative approaches benefiting quantum computing and cryptography. 5. Enhanced Security Awareness Growing awareness</code> |
| <code>QUESTION #2\n...\n\nContext:\n{"text": "What are the latest developments in quantum computing and how will they impact cryptography?", "response": "Opportunities</code> | <code>{"text": "What are the latest developments in quantum computing and how will they impact cryptography?", "response": "Opportunities 1. Post-Quantum Cryptography Active research in PQC offers robust cryptographic solutions resistant to quantum attacks. 2. Advancements in Encryption Strengthening symmetric encryption algorithms like AES-256 and developing quantum-safe protocols can enhance data security. 3. Innovative Quantum-Safe Encryption Hybrid encryption schemes combining classical and quantum-resistant algorithms can fortify data security. 4. Cross-Disciplinary Collaboration Collaborations across disciplines can lead to innovative approaches benefiting quantum computing and cryptography. 5. Enhanced Security Awareness Growing awareness</code> |
| <code>Post-Quantum Cryptography Active research in PQC offers robust cryptographic solutions resistant to quantum attacks.</code> | <code>{"text": "What are the latest developments in quantum computing and how will they impact cryptography?", "response": "Opportunities 1. Post-Quantum Cryptography Active research in PQC offers robust cryptographic solutions resistant to quantum attacks. 2. Advancements in Encryption Strengthening symmetric encryption algorithms like AES-256 and developing quantum-safe protocols can enhance data security. 3. Innovative Quantum-Safe Encryption Hybrid encryption schemes combining classical and quantum-resistant algorithms can fortify data security. 4. Cross-Disciplinary Collaboration Collaborations across disciplines can lead to innovative approaches benefiting quantum computing and cryptography. 5. Enhanced Security Awareness Growing awareness</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
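The same loss configuration can be reconstructed directly; a minimal sketch using the base model and the dimensions listed above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# In-batch negatives loss, wrapped so embeddings remain useful when truncated to smaller dims
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])
```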
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 5
- `per_device_eval_batch_size`: 5
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 5
- `per_device_eval_batch_size`: 5
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
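Putting the non-default hyperparameters above together, a minimal end-to-end training sketch might look as follows; the two-row dataset and the evaluation split are placeholders for the 197 anchor/positive pairs described earlier.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Placeholder anchor/positive pairs; the real dataset has 197 such rows
train_dataset = Dataset.from_dict({
    "sentence_0": ["QUESTION #1",
                   "Post-Quantum Cryptography offers quantum-resistant solutions."],
    "sentence_1": ["Opportunities 1. Post-Quantum Cryptography ...",
                   "Opportunities 1. Post-Quantum Cryptography ..."],
})

loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model),
                      matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="midterm-ft-1",
    num_train_epochs=5,
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    eval_strategy="steps",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder; use a held-out split in practice
    loss=loss,
)
trainer.train()
```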
### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0 | 40 | 0.7653 |
| 1.25 | 50 | 0.7658 |
| 2.0 | 80 | 0.7961 |
| 2.5 | 100 | 0.8061 |
| 3.0 | 120 | 0.8083 |
| 3.75 | 150 | 0.8004 |
| 4.0 | 160 | 0.7969 |
| 5.0 | 200 | 0.8002 |
### Framework Versions
- Python: 3.13.1
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
|
{"base_model": "Snowflake/snowflake-arctic-embed-l", "library_name": "sentence-transformers", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:197", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "Economic Considerations Balancing the costs of space exploration with tangible\\n', additional_kwargs={}, response_metadata={})]", "sentences": ["technology development. Risks 1. Ethical Implications Concerns about the ethical use of CRISPR technology in human germline cells and embryos. 2. Safety Concerns Risks of unforeseen effects in clinical applications and the potential for exploitation for eugenics. 3. Access Disparities Unequal access to genetic engineering technologies in healthcare, agriculture, and the environment. 4. Regulatory Challenges Restrictions on federal funding for certain genetic engineering research. Questions 1. How can ethical frameworks be effectively implemented and monitored to ensure responsible genetic engineering practices? 2. What strategies can be employed to address disparities in access to genetic engineering technologies? 3. How can the", "should governments play in regulating and incentivizing sustainable practices in the fashion industry? Solutions 1. Establish clear sustainability standards and certifications to verify authenticity. 2. Collaborate with suppliers to trace and optimize sustainable sourcing and production. 3. Integrate sustainability metrics into business models and incentivize eco-friendly practices. 4. Advocate for supportive policies and regulations to drive industry-wide sustainability efforts.\", \"metadata\": {\"timestamp\": \"2025-02-24T18:45:05.200602\", \"type\": \"supervisor_response\"}}", "{\"text\": \"How does space exploration benefit everyday life on Earth?\", \"response\": \"Opportunities 1. Health Innovations Space research offers insights for revolutionary healthcare solutions. 2. Sustainable Technologies Space tech can drive environmental sustainability on Earth. 3. Climate Change Solutions Enhancing space exploration can aid in monitoring and combating climate change. 4. Resource Exploration Discoveries in space could alleviate resource scarcity on Earth. 5. International Collaboration Space missions can promote global unity and cooperation. 6. Space Tourism and Education Inspiring space exploration can fuel interest in STEM fields. Risks 1. Economic Considerations Balancing the costs of space exploration with tangible"]}, {"source_sentence": "QUESTION #2\\n...\\n\\nContext:\\n{\"text\": \"Explain the environmental impact of fast fashion and sustainable alternatives\", \"response\": \"Opportunities", "sentences": ["{\"text\": \"Explain the environmental impact of fast fashion and sustainable alternatives\", \"response\": \"Opportunities 1. Consumer Awareness Growing awareness among consumers can drive demand for sustainable fashion, influencing industry practices. 2. Shift to Sustainability Momentum towards sustainable practices allows brands to differentiate themselves and meet consumer preferences. 3. 
Innovative Technologies Adoption of technologies like AI can enhance efficiency, minimize waste, and improve overall sustainability. 4. Microplastic Trapping Initiatives targeting specific environmental concerns, such as microplastic pollution, demonstrate potential solutions for mitigating impact. Risks 1. Greenwashing Some brands may use sustainable", "Growing awareness of quantum threats can drive proactive cybersecurity strategies. Risks 1. Quantum Threat to Cryptography RSA and ECDSA face vulnerability once quantum computers reach a certain scale. 2. Decryption of Current Data Encrypted data today may become susceptible to decryption in the future. 3. Potential Threats Shors algorithm and the wide-scale compromise of public-key encryption methods pose significant risks. 4. Challenges in Quantum Computing Scalability issues and efficient error correction methods are challenges in quantum computing. Questions 1. How fast can organizations transition to post-quantum cryptography to mitigate quantum threats effectively? 2. What are the specific vulnerabilities of current encryption", "{\"text\": \"What are the most promising advances in medical research for treating cancer?\", \"response\": \"Opportunities 1. Immunotherapy Targeted approach with fewer side effects. 2. Precision Medicine Tailored treatments based on genetic profiles. 3. Gene Editing Technology Enhanced precision and efficacy. 4. Clinical Trials Access to cutting-edge therapies and global progress. 5. Genomic Data Integration Personalized and targeted therapies. 6. Surgical Advances Improved decision-making and patient outcomes. 7. Novel Therapies Setting new standards of care and improving survival rates. Risks 1. Ensuring accessibility and affordability of these advanced treatments. 2. Ethical considerations regarding gene editing technology. 3. Managing"]}, {"source_sentence": "QUESTION #2\\n...\\n\\nContext:\\nof physical damage and toxicity to marine life.", "sentences": ["Growing awareness of quantum threats can drive proactive cybersecurity strategies. Risks 1. Quantum Threat to Cryptography RSA and ECDSA face vulnerability once quantum computers reach a certain scale. 2. Decryption of Current Data Encrypted data today may become susceptible to decryption in the future. 3. Potential Threats Shors algorithm and the wide-scale compromise of public-key encryption methods pose significant risks. 4. Challenges in Quantum Computing Scalability issues and efficient error correction methods are challenges in quantum computing. Questions 1. How fast can organizations transition to post-quantum cryptography to mitigate quantum threats effectively? 2. What are the specific vulnerabilities of current encryption", "of physical damage and toxicity to marine life. 2. Disturbance of Ecosystem Balance Contamination can disrupt marine ecosystems delicate balance. 3. Impact on Carbon Transport Altering carbon transport may affect the biological carbon pump and climate. 4. Complex Danger Microplastic contamination presents a complex and escalating risk to aquatic ecosystems. 5. Entanglement and Ingestion The entanglement and ingestion of microplastics can be lethal to marine biota. Questions 1. How can innovative cleanup technologies be scaled up and made more accessible globally? 2. What strategies can be employed to accelerate the adoption of biodegradable alternatives on a large scale? 3. 
How can public awareness campaigns be tailored to effectively", "{\"text\": \"Explain the environmental impact of fast fashion and sustainable alternatives\", \"response\": \"Opportunities 1. Consumer Awareness Growing awareness among consumers can drive demand for sustainable fashion, influencing industry practices. 2. Shift to Sustainability Momentum towards sustainable practices allows brands to differentiate themselves and meet consumer preferences. 3. Innovative Technologies Adoption of technologies like AI can enhance efficiency, minimize waste, and improve overall sustainability. 4. Microplastic Trapping Initiatives targeting specific environmental concerns, such as microplastic pollution, demonstrate potential solutions for mitigating impact. Risks 1. Greenwashing Some brands may use sustainable"]}, {"source_sentence": "QUESTION #1\\n", "sentences": ["of physical damage and toxicity to marine life. 2. Disturbance of Ecosystem Balance Contamination can disrupt marine ecosystems delicate balance. 3. Impact on Carbon Transport Altering carbon transport may affect the biological carbon pump and climate. 4. Complex Danger Microplastic contamination presents a complex and escalating risk to aquatic ecosystems. 5. Entanglement and Ingestion The entanglement and ingestion of microplastics can be lethal to marine biota. Questions 1. How can innovative cleanup technologies be scaled up and made more accessible globally? 2. What strategies can be employed to accelerate the adoption of biodegradable alternatives on a large scale? 3. How can public awareness campaigns be tailored to effectively", "sustainable practices in space missions to minimize environmental impact.\", \"metadata\": {\"timestamp\": \"2025-02-24T18:50:04.420957\", \"type\": \"supervisor_response\"}}", "{\"text\": \"What are the most promising advances in medical research for treating cancer?\", \"response\": \"Opportunities 1. Immunotherapy Targeted approach with fewer side effects. 2. Precision Medicine Tailored treatments based on genetic profiles. 3. Gene Editing Technology Enhanced precision and efficacy. 4. Clinical Trials Access to cutting-edge therapies and global progress. 5. Genomic Data Integration Personalized and targeted therapies. 6. Surgical Advances Improved decision-making and patient outcomes. 7. Novel Therapies Setting new standards of care and improving survival rates. Risks 1. Ensuring accessibility and affordability of these advanced treatments. 2. Ethical considerations regarding gene editing technology. 3. Managing"]}, {"source_sentence": "Shift to Sustainability Momentum towards sustainable practices allows brands to differentiate themselves and meet consumer preferences.", "sentences": ["{\"text\": \"Explain the environmental impact of fast fashion and sustainable alternatives\", \"response\": \"Opportunities 1. Consumer Awareness Growing awareness among consumers can drive demand for sustainable fashion, influencing industry practices. 2. Shift to Sustainability Momentum towards sustainable practices allows brands to differentiate themselves and meet consumer preferences. 3. Innovative Technologies Adoption of technologies like AI can enhance efficiency, minimize waste, and improve overall sustainability. 4. Microplastic Trapping Initiatives targeting specific environmental concerns, such as microplastic pollution, demonstrate potential solutions for mitigating impact. Risks 1. 
Greenwashing Some brands may use sustainable", "may use sustainable initiatives as a marketing tactic without meaningful change. 2. Complex Supply Chains Implementing sustainable practices across intricate supply chains poses challenges. 3. Balancing Profit Sustainability Ensuring profitability while prioritizing sustainability goals can be a delicate balance. 4. Regulatory Compliance Meeting evolving environmental regulations adds complexity and costs. Questions 1. How can the industry ensure transparent and authentic sustainability practices to avoid greenwashing? 2. What strategies can be implemented to address the complexities of sustainable supply chains effectively? 3. How can brands balance financial viability with long-term sustainability goals? 4. What role should governments", "{\"text\": \"What are the ethical implications of genetic engineering and CRISPR technology?\", \"response\": \"Opportunities 1. Precision Medicine The ability to tailor treatments to individual genetic profiles can revolutionize healthcare effectiveness. 2. Agricultural Advancements Enhancing crop characteristics can address food security challenges sustainably. 3. Environmental Conservation Genetic engineering can aid in ecosystem restoration and biodiversity preservation. 4. Ethical Frameworks Establishing robust ethical guidelines can ensure responsible and transparent use of genetic engineering technologies. 5. Public Engagement Informed discussions involving diverse stakeholders can lead to more inclusive and responsible technology"]}], "model-index": [{"name": "SentenceTransformer based on Snowflake/snowflake-arctic-embed-l", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "Unknown", "type": "unknown"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.7553191489361702, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.7978723404255319, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8191489361702128, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.851063829787234, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.7553191489361702, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.26595744680851063, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.1638297872340425, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.08510638297872339, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.7553191489361702, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.7978723404255319, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8191489361702128, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.851063829787234, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.8001900641018239, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.784462597095576, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7919458814258571, "name": "Cosine Map@100"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,321 |
Salesforce/blip2-opt-2.7b
|
Salesforce
|
image-text-to-text
|
[
"transformers",
"pytorch",
"safetensors",
"blip-2",
"visual-question-answering",
"vision",
"image-to-text",
"image-captioning",
"image-text-to-text",
"en",
"arxiv:2301.12597",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2023-02-06T16:21:49Z |
2025-02-03T06:27:50+00:00
| 510,501 | 343 |
---
language: en
license: mit
pipeline_tag: image-text-to-text
tags:
- vision
- image-to-text
- image-captioning
- visual-question-answering
---
# BLIP-2, OPT-2.7b, pre-trained only
BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.
The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.
The goal for the model is simply to predict the next text token, given the query embeddings and the previous text.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model
## Direct Use and Downstream Use
You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.
## Bias, Risks, Limitations, and Ethical Considerations
BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card.
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
>
BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
BLIP2 has not been tested in real-world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context in which it is being deployed.
## Ethical Considerations
This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
### Memory requirements
The memory requirements differ based on the precision one uses. One can use 4-bit inference using [Bitsandbytes](https://huggingface.co/blog/4bit-transformers-bitsandbytes), which greatly reduces the memory requirements; a hedged 4-bit loading sketch is included after the int8 example below.
| dtype | Largest Layer or Residual Group | Total Size | Training using Adam |
|-------------------|---------------------------------|------------|----------------------|
| float32 | 490.94 MB | 14.43 GB | 57.72 GB |
| float16/bfloat16 | 245.47 MB | 7.21 GB | 28.86 GB |
| int8 | 122.73 MB | 3.61 GB | 14.43 GB |
| int4 | 61.37 MB | 1.8 GB | 7.21 GB |
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details>
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details>
##### In half precision (`float16`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details>
##### In 8-bit precision (`int8`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details>
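##### In 4-bit precision (`int4`)
The memory table above also lists int4. The following is a hedged sketch of 4-bit loading with bitsandbytes; the `BitsAndBytesConfig` values shown are common defaults and are an assumption, not settings validated by the BLIP-2 authors.
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration, BitsAndBytesConfig

# 4-bit quantization config (illustrative defaults, not author-validated settings)
quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", quantization_config=quantization_config, device_map="auto"
)

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details>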
| null |
Non_BioNLP
|
# BLIP-2, OPT-2.7b, pre-trained only
BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.
The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.
The goal for the model is simply to predict the next text token, given the query embeddings and the previous text.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model
## Direct Use and Downstream Use
You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.
## Bias, Risks, Limitations, and Ethical Considerations
BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card.
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
>
BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
BLIP2 has not been tested in real-world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context in which it is being deployed.
## Ethical Considerations
This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
### Memory requirements
The memory requirements differ based on the precision one uses. One can use 4-bit inference using [Bitsandbytes](https://huggingface.co/blog/4bit-transformers-bitsandbytes), which greatly reduces the memory requirements.
| dtype | Largest Layer or Residual Group | Total Size | Training using Adam |
|-------------------|---------------------------------|------------|----------------------|
| float32 | 490.94 MB | 14.43 GB | 57.72 GB |
| float16/bfloat16 | 245.47 MB | 7.21 GB | 28.86 GB |
| int8 | 122.73 MB | 3.61 GB | 14.43 GB |
| int4 | 61.37 MB | 1.8 GB | 7.21 GB |
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details>
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details>
##### In half precision (`float16`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details>
##### In 8-bit precision (`int8`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details>
|
{"language": "en", "license": "mit", "pipeline_tag": "image-text-to-text", "tags": ["vision", "image-to-text", "image-captioning", "visual-question-answering"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,322 |
blockblockblock/LLaMA-33B-HF-bpw4-exl2
|
blockblockblock
|
text-generation
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] | 2024-08-05T22:45:09Z |
2024-08-05T22:48:51+00:00
| 8 | 0 |
---
language:
- en
library_name: transformers
---
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA: Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
- exploring potential applications such as question answering, natural language understanding or reading comprehension,
- understanding capabilities and limitations of current language models, and developing techniques to improve those,
- evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measures to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >LLaMA</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
</tbody>
</table>
*Table 1 - Summary of LLaMA Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
</th>
<tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
</th>
<tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
</th>
<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLaMA Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that a lower value is better, indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary of bias in our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
| null |
Non_BioNLP
|
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA: Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
- exploring potential applications such as question answering, natural language understanding or reading comprehension,
- understanding capabilities and limitations of current language models, and developing techniques to improve those,
- evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measures to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >LLaMA</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
</tbody>
</table>
*Table 1 - Summary of LLaMA Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
</th>
<tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
</th>
<tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
</th>
<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLaMA Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that a lower value is better, indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary of bias in our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
|
{"language": ["en"], "library_name": "transformers"}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,323 |
TransferGraph/IMSyPP_hate_speech_it-finetuned-lora-ag_news
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:ag_news",
"base_model:IMSyPP/hate_speech_it",
"base_model:adapter:IMSyPP/hate_speech_it",
"license:mit",
"model-index",
"region:us"
] | 2024-02-27T23:19:23Z |
2024-02-28T01:13:59+00:00
| 3 | 0 |
---
base_model: IMSyPP/hate_speech_it
datasets:
- ag_news
library_name: peft
license: mit
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: IMSyPP_hate_speech_it-finetuned-lora-ag_news
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: ag_news
type: ag_news
config: default
split: test
args: default
metrics:
- type: accuracy
value: 0.8935526315789474
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IMSyPP_hate_speech_it-finetuned-lora-ag_news
This model is a fine-tuned version of [IMSyPP/hate_speech_it](https://huggingface.co/IMSyPP/hate_speech_it) on the ag_news dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.8936
## Model description
More information needed
## Intended uses & limitations
More information needed
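For quick inference, the following is a hedged usage sketch (not part of the original training setup): it loads the LoRA adapter on top of the base checkpoint with `peft`. The label order is assumed to follow the `ag_news` dataset (World, Sports, Business, Sci/Tech); verify it against the adapter configuration before relying on the mapping.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "IMSyPP/hate_speech_it"
adapter_id = "TransferGraph/IMSyPP_hate_speech_it-finetuned-lora-ag_news"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id)  # base checkpoint with a 4-way head
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter
model.eval()

inputs = tokenizer("NASA launches a new probe to study the outer planets.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

labels = ["World", "Sports", "Business", "Sci/Tech"]  # assumed ag_news label order
print(labels[logits.argmax(dim=-1).item()])
```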
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
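As an illustration only, these hyperparameters roughly map onto `transformers` `TrainingArguments` as sketched below; the actual training script for this adapter is not published, so the wiring shown is an assumption.
```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above (a sketch, not the original script)
training_args = TrainingArguments(
    output_dir="IMSyPP_hate_speech_it-finetuned-lora-ag_news",
    learning_rate=4e-4,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=4,
)
```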
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.25 | None | 0 |
| 0.8780 | 0.4558 | 0 |
| 0.8858 | 0.3405 | 1 |
| 0.8928 | 0.3146 | 2 |
| 0.8936 | 0.2976 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IMSyPP_hate_speech_it-finetuned-lora-ag_news
This model is a fine-tuned version of [IMSyPP/hate_speech_it](https://huggingface.co/IMSyPP/hate_speech_it) on the ag_news dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.8936
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.25 | None | 0 |
| 0.8780 | 0.4558 | 0 |
| 0.8858 | 0.3405 | 1 |
| 0.8928 | 0.3146 | 2 |
| 0.8936 | 0.2976 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"base_model": "IMSyPP/hate_speech_it", "datasets": ["ag_news"], "library_name": "peft", "license": "mit", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "IMSyPP_hate_speech_it-finetuned-lora-ag_news", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "ag_news", "type": "ag_news", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.8935526315789474, "name": "accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,324 |
Laihaoran/BioClinical_M_P_CTRATE_BERT
|
Laihaoran
| null |
[
"pytorch",
"bert",
"arxiv:1910.09700",
"region:us"
] | 2024-09-16T18:35:45Z |
2024-09-16T18:42:28+00:00
| 7 | 0 |
---
{}
---
# Bio Clinical BERT with MIMIC and Padchest
<!-- Provide a quick summary of what the model is/does. -->
The Publicly Available Clinical BERT Embeddings paper contains four unique BioclinicalBERT models: initialized with BERT-Base (biobert_v1.0_pubmed_pmc) or BioBERT (BioBERT-Base v1.0 + PubMed 200K + PMC 270K) & trained on either all MIMIC notes or only discharge summaries.
This model card describes the Bio+Clinical BERT model, which was initialized from BioBERT & trained on all MIMIC notes and Padchest (translated to English via Google Translate).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
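In the absence of an official example, a minimal hedged sketch follows. It assumes the checkpoint loads as a standard BERT encoder and that the repository ships a compatible clinical BERT tokenizer; both points should be verified against the files in this repository.
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "Laihaoran/BioClinical_M_P_CTRATE_BERT"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # assumes tokenizer files are present in the repo
model = AutoModel.from_pretrained(model_id)

text = "The chest CT shows no evidence of pleural effusion."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Use the [CLS] vector as a simple sentence-level representation
cls_embedding = outputs.last_hidden_state[:, 0]
print(cls_embedding.shape)
```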
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| null |
BioNLP
|
# Bio Clinical BERT with MIMIC and Padchest
<!-- Provide a quick summary of what the model is/does. -->
The Publicly Available Clinical BERT Embeddings paper contains four unique BioclinicalBERT models: initialized with BERT-Base (biobert_v1.0_pubmed_pmc) or BioBERT (BioBERT-Base v1.0 + PubMed 200K + PMC 270K) & trained on either all MIMIC notes or only discharge summaries.
This model card describes the Bio+Clinical BERT model, which was initialized from BioBERT & trained on all MIMIC notes and Padchest (translated to English via Google Translate).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{}
|
task
|
[
"TRANSLATION"
] | 42,325 |
gaudi/opus-mt-es-niu-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-22T15:46:27Z |
2024-10-19T02:54:07+00:00
| 6 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-es-niu)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-es-niu).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-es-niu --output_dir ./ctranslate2/opus-mt-es-niu-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-es-niu-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-es-niu-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
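For CPU-only environments, the same pattern applies with the settings from the **Compute Type** section above (`device="cpu"` with `compute_type="int8"`). The snippet below is a minimal sketch along those lines; the example input sentence is arbitrary and not part of the original card.
```python
from ctranslate2 import Translator
import transformers

model_dir = "./opus-mt-es-niu-ctranslate2"  # Path to the cloned model directory.

translator = Translator(
    model_path=model_dir,
    device="cpu",            # CPU-only inference.
    inter_threads=1,         # Maximum number of parallel translations.
    intra_threads=4,         # Number of OpenMP threads per translator.
    compute_type="int8",     # Recommended compute type for CPU (see Compute Type above).
)

tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("Hola, ¿cómo estás?"))  # Arbitrary example input.
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```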
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-es-niu-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-es-niu) by Helsinki-NLP.
| null |
Non_BioNLP
|
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-es-niu)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-es-niu).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-es-niu --output_dir ./ctranslate2/opus-mt-es-niu-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-es-niu-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-es-niu-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-es-niu-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-es-niu) by Helsinki-NLP.
|
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
|
task
|
[
"TRANSLATION"
] | 42,326 |
Spencer29496/mt5-small-finetuned-amazon-en-es
|
Spencer29496
|
summarization
|
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-12-05T00:12:15Z |
2024-12-05T02:54:54+00:00
| 20 | 0 |
---
base_model: google/mt5-small
library_name: transformers
license: apache-2.0
tags:
- summarization
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
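A sketch of how these values could map onto `Seq2SeqTrainingArguments`; this is illustrative only (the original training script is not included in this card, and the output directory name is hypothetical):

```python
from transformers import Seq2SeqTrainingArguments

# Illustrative reconstruction of the hyperparameters listed above; not the original training script.
training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-finetuned-amazon-en-es",  # hypothetical output directory
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",          # AdamW with betas=(0.9, 0.999) and epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=8,
)
```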
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
{"base_model": "google/mt5-small", "library_name": "transformers", "license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "mt5-small-finetuned-amazon-en-es", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 42,327 |
RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf
|
RichardErkhov
| null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-10-03T11:22:31Z |
2024-10-03T22:25:17+00:00
| 311 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
c4ai-command-r-08-2024 - GGUF
- Model creator: https://huggingface.co/SillyTilly/
- Original model: https://huggingface.co/SillyTilly/c4ai-command-r-08-2024/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [c4ai-command-r-08-2024.Q2_K.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q2_K.gguf) | Q2_K | 11.93GB |
| [c4ai-command-r-08-2024.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.IQ3_XS.gguf) | IQ3_XS | 13.08GB |
| [c4ai-command-r-08-2024.IQ3_S.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.IQ3_S.gguf) | IQ3_S | 13.74GB |
| [c4ai-command-r-08-2024.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q3_K_S.gguf) | Q3_K_S | 13.7GB |
| [c4ai-command-r-08-2024.IQ3_M.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.IQ3_M.gguf) | IQ3_M | 14.2GB |
| [c4ai-command-r-08-2024.Q3_K.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q3_K.gguf) | Q3_K | 15.12GB |
| [c4ai-command-r-08-2024.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q3_K_M.gguf) | Q3_K_M | 15.12GB |
| [c4ai-command-r-08-2024.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q3_K_L.gguf) | Q3_K_L | 16.36GB |
| [c4ai-command-r-08-2024.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.IQ4_XS.gguf) | IQ4_XS | 16.75GB |
| [c4ai-command-r-08-2024.Q4_0.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q4_0.gguf) | Q4_0 | 17.43GB |
| [c4ai-command-r-08-2024.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.IQ4_NL.gguf) | IQ4_NL | 17.59GB |
| [c4ai-command-r-08-2024.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q4_K_S.gguf) | Q4_K_S | 17.55GB |
| [c4ai-command-r-08-2024.Q4_K.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q4_K.gguf) | Q4_K | 18.44GB |
| [c4ai-command-r-08-2024.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q4_K_M.gguf) | Q4_K_M | 18.44GB |
| [c4ai-command-r-08-2024.Q4_1.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q4_1.gguf) | Q4_1 | 19.19GB |
| [c4ai-command-r-08-2024.Q5_0.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q5_0.gguf) | Q5_0 | 20.95GB |
| [c4ai-command-r-08-2024.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q5_K_S.gguf) | Q5_K_S | 20.95GB |
| [c4ai-command-r-08-2024.Q5_K.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q5_K.gguf) | Q5_K | 21.47GB |
| [c4ai-command-r-08-2024.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q5_K_M.gguf) | Q5_K_M | 21.47GB |
| [c4ai-command-r-08-2024.Q5_1.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q5_1.gguf) | Q5_1 | 22.71GB |
| [c4ai-command-r-08-2024.Q6_K.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q6_K.gguf) | Q6_K | 24.68GB |
| [c4ai-command-r-08-2024.Q8_0.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q8_0.gguf) | Q8_0 | 31.97GB |
Original model description:
---
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
license: cc-by-nc-4.0
library_name: transformers
extra_gated_prompt: "By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and acknowledge that the information you provide will be collected, used, and shared in accordance with Cohere’s [Privacy Policy]( https://cohere.com/privacy)."
extra_gated_fields:
Name: text
Affiliation: text
Country:
type: select
options:
- Aruba
- Afghanistan
- Angola
- Anguilla
- Åland Islands
- Albania
- Andorra
- United Arab Emirates
- Argentina
- Armenia
- American Samoa
- Antarctica
- French Southern Territories
- Antigua and Barbuda
- Australia
- Austria
- Azerbaijan
- Burundi
- Belgium
- Benin
- Bonaire Sint Eustatius and Saba
- Burkina Faso
- Bangladesh
- Bulgaria
- Bahrain
- Bahamas
- Bosnia and Herzegovina
- Saint Barthélemy
- Belarus
- Belize
- Bermuda
- Plurinational State of Bolivia
- Brazil
- Barbados
- Brunei-Darussalam
- Bhutan
- Bouvet-Island
- Botswana
- Central African Republic
- Canada
- Cocos (Keeling) Islands
- Switzerland
- Chile
- China
- Côte-dIvoire
- Cameroon
- Democratic Republic of the Congo
- Cook Islands
- Colombia
- Comoros
- Cabo Verde
- Costa Rica
- Cuba
- Curaçao
- Christmas Island
- Cayman Islands
- Cyprus
- Czechia
- Germany
- Djibouti
- Dominica
- Denmark
- Dominican Republic
- Algeria
- Ecuador
- Egypt
- Eritrea
- Western Sahara
- Spain
- Estonia
- Ethiopia
- Finland
- Fiji
- Falkland Islands (Malvinas)
- France
- Faroe Islands
- Federated States of Micronesia
- Gabon
- United Kingdom
- Georgia
- Guernsey
- Ghana
- Gibraltar
- Guinea
- Guadeloupe
- Gambia
- Guinea Bissau
- Equatorial Guinea
- Greece
- Grenada
- Greenland
- Guatemala
- French Guiana
- Guam
- Guyana
- Hong Kong
- Heard Island and McDonald Islands
- Honduras
- Croatia
- Haiti
- Hungary
- Indonesia
- Isle of Man
- India
- British Indian Ocean Territory
- Ireland
- Islamic Republic of Iran
- Iraq
- Iceland
- Israel
- Italy
- Jamaica
- Jersey
- Jordan
- Japan
- Kazakhstan
- Kenya
- Kyrgyzstan
- Cambodia
- Kiribati
- Saint-Kitts-and-Nevis
- South Korea
- Kuwait
- Lao-Peoples-Democratic-Republic
- Lebanon
- Liberia
- Libya
- Saint-Lucia
- Liechtenstein
- Sri Lanka
- Lesotho
- Lithuania
- Luxembourg
- Latvia
- Macao
- Saint Martin (French-part)
- Morocco
- Monaco
- Republic of Moldova
- Madagascar
- Maldives
- Mexico
- Marshall Islands
- North Macedonia
- Mali
- Malta
- Myanmar
- Montenegro
- Mongolia
- Northern Mariana Islands
- Mozambique
- Mauritania
- Montserrat
- Martinique
- Mauritius
- Malawi
- Malaysia
- Mayotte
- Namibia
- New Caledonia
- Niger
- Norfolk Island
- Nigeria
- Nicaragua
- Niue
- Netherlands
- Norway
- Nepal
- Nauru
- New Zealand
- Oman
- Pakistan
- Panama
- Pitcairn
- Peru
- Philippines
- Palau
- Papua New Guinea
- Poland
- Puerto Rico
- North Korea
- Portugal
- Paraguay
- State of Palestine
- French Polynesia
- Qatar
- Réunion
- Romania
- Russia
- Rwanda
- Saudi Arabia
- Sudan
- Senegal
- Singapore
- South Georgia and the South Sandwich Islands
- Saint Helena Ascension and Tristan da Cunha
- Svalbard and Jan Mayen
- Solomon Islands
- Sierra Leone
- El Salvador
- San Marino
- Somalia
- Saint Pierre and Miquelon
- Serbia
- South Sudan
- Sao Tome and Principe
- Suriname
- Slovakia
- Slovenia
- Sweden
- Eswatini
- Sint Maarten (Dutch-part)
- Seychelles
- Syrian Arab Republic
- Turks and Caicos Islands
- Chad
- Togo
- Thailand
- Tajikistan
- Tokelau
- Turkmenistan
- Timor Leste
- Tonga
- Trinidad and Tobago
- Tunisia
- Turkey
- Tuvalu
- Taiwan
- United Republic of Tanzania
- Uganda
- Ukraine
- United States Minor Outlying Islands
- Uruguay
- United-States
- Uzbekistan
- Holy See (Vatican City State)
- Saint Vincent and the Grenadines
- Bolivarian Republic of Venezuela
- Virgin Islands British
- Virgin Islands U.S.
- VietNam
- Vanuatu
- Wallis and Futuna
- Samoa
- Yemen
- South Africa
- Zambia
- Zimbabwe
Receive email updates on C4AI and Cohere research, events, products and services?:
type: select
options:
- Yes
- No
I agree to use this model for non-commercial use ONLY: checkbox
---
# Model Card for C4AI Command R 08-2024
## Model Summary
<!-- Provide a quick summary of what the model is/does. -->
C4AI Command R 08-2024 is a research release of a 35 billion parameter highly performant generative model. Command R 08-2024 is a large language model with open weights optimized for a variety of use cases including reasoning, summarization, and question answering. Command R 08-2024 offers multilingual generation, having been trained on 23 languages and evaluated in 10, along with highly performant RAG capabilities.
Developed by: Cohere and [Cohere For AI](https://cohere.for.ai)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: c4ai-command-r-08-2024
- Model Size: 35 billion parameters
- Context length: 128K
**Try C4AI Command R**
If you want to try Command R before downloading the weights, the model is hosted in a hugging face space [here](https://huggingface.co/spaces/CohereForAI/c4ai-command?model=command-r-08-2024).
**Usage**
Please use `transformers` version 4.39.1 or higher
```python
# pip install 'transformers>=4.39.1'
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/c4ai-command-r-08-2024"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-08-2024 chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety. We use grouped query attention (GQA) to improve inference speed.
**Languages covered**: The model has been trained on 23 languages (English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Simplified Chinese, Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian) and evaluated on 10 languages (English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Simplified Chinese).
**Context length**: Command R 08-2024 supports a context length of 128K.
### Tool use & Agent capabilities:
Command R 08-2024 has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance.
Command R 08-2024’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R 08-2024 may use one of its supplied tools more than once.
The model has been trained to recognise a special `directly_answer` tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions. We recommend including the `directly_answer` tool, but it can be removed or renamed if required.
Comprehensive documentation for working with Command R 08-2024's tool use prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r).
Command R 08-2024 also supports Hugging Face's [tool use API](https://huggingface.co/docs/transformers/main/en/chat_templating#advanced-tool-use--function-calling)
The code snippet below shows a minimal working example on how to render a prompt.
<details>
<summary><b>Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]</b> </summary>
```python
from transformers import AutoTokenizer
model_id = "CohereForAI/c4ai-command-r-08-2024"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# define conversation input:
conversation = [
{"role": "user", "content": "Whats the biggest penguin in the world?"}
]
# Define tools available for the model to use:
tools = [
{
"name": "internet_search",
"description": "Returns a list of relevant document snippets for a textual query retrieved from the internet",
"parameter_definitions": {
"query": {
"description": "Query to search the internet with",
"type": 'str',
"required": True
}
}
},
{
'name': "directly_answer",
"description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history",
'parameter_definitions': {}
}
]
# render the tool use prompt as a string:
tool_use_prompt = tokenizer.apply_tool_use_template(
conversation,
tools=tools,
tokenize=False,
add_generation_prompt=True,
)
print(tool_use_prompt)
```
</details>
<details>
<summary><b>Usage: Rendering prompts with the Tool Use API [CLICK TO EXPAND]</b> </summary>
```python
from transformers import AutoTokenizer
model_id = "CohereForAI/c4ai-command-r-08-2024"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# define conversation input:
conversation = [
{"role": "user", "content": "Whats the biggest penguin in the world?"}
]
# Define tools available for the model to use
# Type hints and docstrings from Python functions are automatically extracted
def internet_search(query: str):
"""
Returns a list of relevant document snippets for a textual query retrieved from the internet
Args:
query: Query to search the internet with
"""
pass
def directly_answer():
"""
Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history
"""
pass
tools = [internet_search, directly_answer]
# render the tool use prompt as a string:
tool_use_prompt = tokenizer.apply_chat_template(
conversation,
tools=tools,
tokenize=False,
add_generation_prompt=True,
)
print(tool_use_prompt)
```
</details>
<details>
<summary><b>Example Rendered Tool Use Prompt [CLICK TO EXPAND]</b></summary>
````
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble
The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral.
# System Preamble
## Basic Rules
You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions.
# User Preamble
## Task and Context
You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging.
## Style Guide
Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.
## Available Tools
Here is a list of tools that you have available to you:
```python
def internet_search(query: str) -> List[Dict]:
"""Returns a list of relevant document snippets for a textual query retrieved from the internet
Args:
query (str): Query to search the internet with
"""
pass
```
```python
def directly_answer() -> List[Dict]:
"""Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history
"""
pass
```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write 'Action:' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user's last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. The list of actions you want to call should be formatted as a list of json objects, for example:
```json
[
{
"tool_name": title of the tool in the specification,
"parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters
}
]```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
````
</details>
<details>
<summary><b>Example Rendered Tool Use Completion [CLICK TO EXPAND]</b></summary>
````
Action: ```json
[
{
"tool_name": "internet_search",
"parameters": {
"query": "biggest penguin in the world"
}
}
]
```
````
</details>
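For downstream execution, the `Action:` block in a completion like the one above can be parsed back into Python objects. The helper below is an illustrative sketch only (it is not part of the model or the `transformers` API) and assumes the completion text has already been decoded to a string:

````python
import json
import re

def parse_tool_actions(completion: str):
    """Extract the json-formatted action list from a rendered tool-use completion.

    Assumes the `Action:` + fenced ```json block format shown in the example above;
    returns a list of {"tool_name": ..., "parameters": ...} dicts.
    """
    match = re.search(r"Action:\s*```json\s*(\[.*?\])\s*```", completion, re.DOTALL)
    if match is None:
        return []
    return json.loads(match.group(1))

example_completion = """Action: ```json
[
    {
        "tool_name": "internet_search",
        "parameters": {"query": "biggest penguin in the world"}
    }
]
```"""

for action in parse_tool_actions(example_completion):
    print(action["tool_name"], action["parameters"])
````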
### Grounded Generation and RAG Capabilities:
Command R 08-2024 has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation.
Command R 08-2024’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured.
By default, Command R 08-2024 will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will insert grounding spans into the answer. See below for an example. This is referred to as `accurate` grounded generation.
The model is trained with a number of other answering modes, which can be selected by prompt changes. A `fast` citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens.
Comprehensive documentation for working with Command R 08-2024's grounded generation prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r).
The code snippet below shows a minimal working example on how to render a prompt.
<details>
<summary> <b>Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]</b> </summary>
````python
from transformers import AutoTokenizer
model_id = "CohereForAI/c4ai-command-r-08-2024"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# define conversation input:
conversation = [
{"role": "user", "content": "Whats the biggest penguin in the world?"}
]
# define documents to ground on:
documents = [
{ "title": "Tall penguins", "text": "Emperor penguins are the tallest growing up to 122 cm in height." },
{ "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."}
]
# render the tool use prompt as a string:
grounded_generation_prompt = tokenizer.apply_grounded_generation_template(
conversation,
documents=documents,
citation_mode="accurate", # or "fast"
tokenize=False,
add_generation_prompt=True,
)
print(grounded_generation_prompt)
````
</details>
<details>
<summary><b>Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]</b></summary>
````
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble
The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral.
# System Preamble
## Basic Rules
You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions.
# User Preamble
## Task and Context
You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging.
## Style Guide
Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|><results>
Document: 0
title: Tall penguins
text: Emperor penguins are the tallest growing up to 122 cm in height.
Document: 1
title: Penguin habitats
text: Emperor penguins only live in Antarctica.
</results><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Carefully perform the following instructions, in order, starting each with a new line.
Firstly, Decide which of the retrieved documents are relevant to the user's last input by writing 'Relevant Documents:' followed by comma-separated list of document numbers. If none are relevant, you should instead write 'None'.
Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user's last input by writing 'Cited Documents:' followed a comma-separated list of document numbers. If you dont want to cite any of them, you should instead write 'None'.
Thirdly, Write 'Answer:' followed by a response to the user's last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup.
Finally, Write 'Grounded answer:' followed by a response to the user's last input in high quality natural english. Use the symbols <co: doc> and </co: doc> to indicate when a fact comes from a document in the search result, e.g <co: 0>my fact</co: 0> for a fact from document 0.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
````
</details>
<details>
<summary><b>Example Rendered Grounded Generation Completion [CLICK TO EXPAND]</b></summary>
````
Relevant Documents: 0,1
Cited Documents: 0,1
Answer: The Emperor Penguin is the tallest or biggest penguin in the world. It is a bird that lives only in Antarctica and grows to a height of around 122 centimetres.
Grounded answer: The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest penguin in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1> and <co: 0>grows to a height of around 122 centimetres.</co: 0>
````
</details>
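The `<co: N>` markup in a grounded answer can be post-processed into plain text plus citation spans. The helper below is a minimal illustrative sketch, not part of any official API:

```python
import re

def extract_citations(grounded_answer: str):
    """Split a grounded answer into plain text and (document_id, cited span) pairs.

    Assumes the <co: N>span</co: N> markup shown in the example completion above.
    """
    citations = [(int(doc_id), span)
                 for doc_id, span in re.findall(r"<co: (\d+)>(.*?)</co: \1>", grounded_answer)]
    plain_text = re.sub(r"</?co: \d+>", "", grounded_answer)
    return plain_text, citations

answer = ("The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> penguin in the world. "
          "It <co: 1>lives only in Antarctica</co: 1>.")
text, cites = extract_citations(answer)
print(text)   # plain answer without markup
print(cites)  # [(0, 'Emperor Penguin'), (0, 'tallest'), (1, 'lives only in Antarctica')]
```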
### Code Capabilities:
Command R 08-2024 has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions.
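As an illustrative sketch (reusing the `tokenizer` and `model` loaded in the Usage section above, with a hypothetical prompt):

```python
# Greedy decoding, as suggested for code-generation related instructions.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")

gen_tokens = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=False,  # greedy decoding; alternatively keep sampling with a low temperature
)
print(tokenizer.decode(gen_tokens[0]))
```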
### Model Card Contact
For errors or additional questions about details in this model card, contact [[email protected]](mailto:[email protected]).
### Terms of Use:
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 35 billion parameter model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try Chat:
You can try Command-R chat in the playground [here](https://dashboard.cohere.com/playground/chat).
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
c4ai-command-r-08-2024 - GGUF
- Model creator: https://huggingface.co/SillyTilly/
- Original model: https://huggingface.co/SillyTilly/c4ai-command-r-08-2024/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [c4ai-command-r-08-2024.Q2_K.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q2_K.gguf) | Q2_K | 11.93GB |
| [c4ai-command-r-08-2024.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.IQ3_XS.gguf) | IQ3_XS | 13.08GB |
| [c4ai-command-r-08-2024.IQ3_S.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.IQ3_S.gguf) | IQ3_S | 13.74GB |
| [c4ai-command-r-08-2024.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q3_K_S.gguf) | Q3_K_S | 13.7GB |
| [c4ai-command-r-08-2024.IQ3_M.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.IQ3_M.gguf) | IQ3_M | 14.2GB |
| [c4ai-command-r-08-2024.Q3_K.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q3_K.gguf) | Q3_K | 15.12GB |
| [c4ai-command-r-08-2024.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q3_K_M.gguf) | Q3_K_M | 15.12GB |
| [c4ai-command-r-08-2024.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q3_K_L.gguf) | Q3_K_L | 16.36GB |
| [c4ai-command-r-08-2024.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.IQ4_XS.gguf) | IQ4_XS | 16.75GB |
| [c4ai-command-r-08-2024.Q4_0.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q4_0.gguf) | Q4_0 | 17.43GB |
| [c4ai-command-r-08-2024.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.IQ4_NL.gguf) | IQ4_NL | 17.59GB |
| [c4ai-command-r-08-2024.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q4_K_S.gguf) | Q4_K_S | 17.55GB |
| [c4ai-command-r-08-2024.Q4_K.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q4_K.gguf) | Q4_K | 18.44GB |
| [c4ai-command-r-08-2024.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q4_K_M.gguf) | Q4_K_M | 18.44GB |
| [c4ai-command-r-08-2024.Q4_1.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q4_1.gguf) | Q4_1 | 19.19GB |
| [c4ai-command-r-08-2024.Q5_0.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q5_0.gguf) | Q5_0 | 20.95GB |
| [c4ai-command-r-08-2024.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q5_K_S.gguf) | Q5_K_S | 20.95GB |
| [c4ai-command-r-08-2024.Q5_K.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q5_K.gguf) | Q5_K | 21.47GB |
| [c4ai-command-r-08-2024.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q5_K_M.gguf) | Q5_K_M | 21.47GB |
| [c4ai-command-r-08-2024.Q5_1.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q5_1.gguf) | Q5_1 | 22.71GB |
| [c4ai-command-r-08-2024.Q6_K.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q6_K.gguf) | Q6_K | 24.68GB |
| [c4ai-command-r-08-2024.Q8_0.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_c4ai-command-r-08-2024-gguf/blob/main/c4ai-command-r-08-2024.Q8_0.gguf) | Q8_0 | 31.97GB |
Original model description:
---
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
license: cc-by-nc-4.0
library_name: transformers
extra_gated_prompt: "By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and acknowledge that the information you provide will be collected, used, and shared in accordance with Cohere’s [Privacy Policy]( https://cohere.com/privacy)."
extra_gated_fields:
Name: text
Affiliation: text
Country:
type: select
options:
- Aruba
- Afghanistan
- Angola
- Anguilla
- Åland Islands
- Albania
- Andorra
- United Arab Emirates
- Argentina
- Armenia
- American Samoa
- Antarctica
- French Southern Territories
- Antigua and Barbuda
- Australia
- Austria
- Azerbaijan
- Burundi
- Belgium
- Benin
- Bonaire Sint Eustatius and Saba
- Burkina Faso
- Bangladesh
- Bulgaria
- Bahrain
- Bahamas
- Bosnia and Herzegovina
- Saint Barthélemy
- Belarus
- Belize
- Bermuda
- Plurinational State of Bolivia
- Brazil
- Barbados
- Brunei-Darussalam
- Bhutan
- Bouvet-Island
- Botswana
- Central African Republic
- Canada
- Cocos (Keeling) Islands
- Switzerland
- Chile
- China
- Côte-dIvoire
- Cameroon
- Democratic Republic of the Congo
- Cook Islands
- Colombia
- Comoros
- Cabo Verde
- Costa Rica
- Cuba
- Curaçao
- Christmas Island
- Cayman Islands
- Cyprus
- Czechia
- Germany
- Djibouti
- Dominica
- Denmark
- Dominican Republic
- Algeria
- Ecuador
- Egypt
- Eritrea
- Western Sahara
- Spain
- Estonia
- Ethiopia
- Finland
- Fiji
- Falkland Islands (Malvinas)
- France
- Faroe Islands
- Federated States of Micronesia
- Gabon
- United Kingdom
- Georgia
- Guernsey
- Ghana
- Gibraltar
- Guinea
- Guadeloupe
- Gambia
- Guinea Bissau
- Equatorial Guinea
- Greece
- Grenada
- Greenland
- Guatemala
- French Guiana
- Guam
- Guyana
- Hong Kong
- Heard Island and McDonald Islands
- Honduras
- Croatia
- Haiti
- Hungary
- Indonesia
- Isle of Man
- India
- British Indian Ocean Territory
- Ireland
- Islamic Republic of Iran
- Iraq
- Iceland
- Israel
- Italy
- Jamaica
- Jersey
- Jordan
- Japan
- Kazakhstan
- Kenya
- Kyrgyzstan
- Cambodia
- Kiribati
- Saint-Kitts-and-Nevis
- South Korea
- Kuwait
- Lao-Peoples-Democratic-Republic
- Lebanon
- Liberia
- Libya
- Saint-Lucia
- Liechtenstein
- Sri Lanka
- Lesotho
- Lithuania
- Luxembourg
- Latvia
- Macao
- Saint Martin (French-part)
- Morocco
- Monaco
- Republic of Moldova
- Madagascar
- Maldives
- Mexico
- Marshall Islands
- North Macedonia
- Mali
- Malta
- Myanmar
- Montenegro
- Mongolia
- Northern Mariana Islands
- Mozambique
- Mauritania
- Montserrat
- Martinique
- Mauritius
- Malawi
- Malaysia
- Mayotte
- Namibia
- New Caledonia
- Niger
- Norfolk Island
- Nigeria
- Nicaragua
- Niue
- Netherlands
- Norway
- Nepal
- Nauru
- New Zealand
- Oman
- Pakistan
- Panama
- Pitcairn
- Peru
- Philippines
- Palau
- Papua New Guinea
- Poland
- Puerto Rico
- North Korea
- Portugal
- Paraguay
- State of Palestine
- French Polynesia
- Qatar
- Réunion
- Romania
- Russia
- Rwanda
- Saudi Arabia
- Sudan
- Senegal
- Singapore
- South Georgia and the South Sandwich Islands
- Saint Helena Ascension and Tristan da Cunha
- Svalbard and Jan Mayen
- Solomon Islands
- Sierra Leone
- El Salvador
- San Marino
- Somalia
- Saint Pierre and Miquelon
- Serbia
- South Sudan
- Sao Tome and Principe
- Suriname
- Slovakia
- Slovenia
- Sweden
- Eswatini
- Sint Maarten (Dutch-part)
- Seychelles
- Syrian Arab Republic
- Turks and Caicos Islands
- Chad
- Togo
- Thailand
- Tajikistan
- Tokelau
- Turkmenistan
- Timor Leste
- Tonga
- Trinidad and Tobago
- Tunisia
- Turkey
- Tuvalu
- Taiwan
- United Republic of Tanzania
- Uganda
- Ukraine
- United States Minor Outlying Islands
- Uruguay
- United-States
- Uzbekistan
- Holy See (Vatican City State)
- Saint Vincent and the Grenadines
- Bolivarian Republic of Venezuela
- Virgin Islands British
- Virgin Islands U.S.
- VietNam
- Vanuatu
- Wallis and Futuna
- Samoa
- Yemen
- South Africa
- Zambia
- Zimbabwe
Receive email updates on C4AI and Cohere research, events, products and services?:
type: select
options:
- Yes
- No
I agree to use this model for non-commercial use ONLY: checkbox
---
# Model Card for C4AI Command R 08-2024
## Model Summary
<!-- Provide a quick summary of what the model is/does. -->
C4AI Command R 08-2024 is a research release of a 35 billion parameter highly performant generative model. Command R 08-2024 is a large language model with open weights optimized for a variety of use cases including reasoning, summarization, and question answering. Command R 08-2024 offers multilingual generation, having been trained on 23 languages and evaluated in 10, along with highly performant RAG capabilities.
Developed by: Cohere and [Cohere For AI](https://cohere.for.ai)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: c4ai-command-r-08-2024
- Model Size: 35 billion parameters
- Context length: 128K
**Try C4AI Command R**
If you want to try Command R before downloading the weights, the model is hosted in a hugging face space [here](https://huggingface.co/spaces/CohereForAI/c4ai-command?model=command-r-08-2024).
**Usage**
Please use `transformers` version 4.39.1 or higher
```python
# pip install 'transformers>=4.39.1'
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/c4ai-command-r-08-2024"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-08-2024 chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety. We use grouped query attention (GQA) to improve inference speed.
**Languages covered**: The model has been trained on 23 languages (English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Simplified Chinese, Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian) and evaluated on 10 languages (English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Simplified Chinese).
**Context length**: Command R 08-2024 supports a context length of 128K.
### Tool use & Agent capabilities:
Command R 08-2024 has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance.
Command R 08-2024’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R 08-2024 may use one of its supplied tools more than once.
The model has been trained to recognise a special `directly_answer` tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions. We recommend including the `directly_answer` tool, but it can be removed or renamed if required.
Comprehensive documentation for working with Command R 08-2024's tool use prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r).
Command R 08-2024 also supports Hugging Face's [tool use API](https://huggingface.co/docs/transformers/main/en/chat_templating#advanced-tool-use--function-calling)
The code snippet below shows a minimal working example on how to render a prompt.
<details>
<summary><b>Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]</b> </summary>
```python
from transformers import AutoTokenizer
model_id = "CohereForAI/c4ai-command-r-08-2024"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# define conversation input:
conversation = [
{"role": "user", "content": "Whats the biggest penguin in the world?"}
]
# Define tools available for the model to use:
tools = [
{
"name": "internet_search",
"description": "Returns a list of relevant document snippets for a textual query retrieved from the internet",
"parameter_definitions": {
"query": {
"description": "Query to search the internet with",
"type": 'str',
"required": True
}
}
},
{
'name': "directly_answer",
"description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history",
'parameter_definitions': {}
}
]
# render the tool use prompt as a string:
tool_use_prompt = tokenizer.apply_tool_use_template(
conversation,
tools=tools,
tokenize=False,
add_generation_prompt=True,
)
print(tool_use_prompt)
```
</details>
<details>
<summary><b>Usage: Rendering prompts with the Tool Use API [CLICK TO EXPAND]</b> </summary>
```python
from transformers import AutoTokenizer
model_id = "CohereForAI/c4ai-command-r-08-2024"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# define conversation input:
conversation = [
{"role": "user", "content": "Whats the biggest penguin in the world?"}
]
# Define tools available for the model to use
# Type hints and docstrings from Python functions are automatically extracted
def internet_search(query: str):
"""
Returns a list of relevant document snippets for a textual query retrieved from the internet
Args:
query: Query to search the internet with
"""
pass
def directly_answer():
"""
Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history
"""
pass
tools = [internet_search, directly_answer]
# render the tool use prompt as a string:
tool_use_prompt = tokenizer.apply_chat_template(
conversation,
tools=tools,
tokenize=False,
add_generation_prompt=True,
)
print(tool_use_prompt)
```
</details>
<details>
<summary><b>Example Rendered Tool Use Prompt [CLICK TO EXPAND]</b></summary>
````
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble
The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral.
# System Preamble
## Basic Rules
You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions.
# User Preamble
## Task and Context
You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging.
## Style Guide
Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.
## Available Tools
Here is a list of tools that you have available to you:
```python
def internet_search(query: str) -> List[Dict]:
"""Returns a list of relevant document snippets for a textual query retrieved from the internet
Args:
query (str): Query to search the internet with
"""
pass
```
```python
def directly_answer() -> List[Dict]:
"""Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history
"""
pass
```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write 'Action:' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user's last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. The list of actions you want to call should be formatted as a list of json objects, for example:
```json
[
{
"tool_name": title of the tool in the specification,
"parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters
}
]```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
````
</details>
<details>
<summary><b>Example Rendered Tool Use Completion [CLICK TO EXPAND]</b></summary>
````
Action: ```json
[
{
"tool_name": "internet_search",
"parameters": {
"query": "biggest penguin in the world"
}
}
]
```
````
</details>
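Completions in this format can be post-processed with a few lines of Python. The helper below is not part of the official documentation; it is a minimal sketch of one way to extract the action list, assuming the completion follows the `Action:` plus fenced-JSON pattern shown above.
````python
import json
import re

def parse_tool_actions(completion: str) -> list[dict]:
    """Extract the json-formatted action list from a tool use completion."""
    match = re.search(r"Action:\s*```json\s*(.*?)\s*```", completion, re.DOTALL)
    if match is None:
        return []
    return json.loads(match.group(1))

completion = """Action: ```json
[
    {
        "tool_name": "internet_search",
        "parameters": {"query": "biggest penguin in the world"}
    }
]
```"""
print(parse_tool_actions(completion))
# [{'tool_name': 'internet_search', 'parameters': {'query': 'biggest penguin in the world'}}]
````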
### Grounded Generation and RAG Capabilities:
Command R 08-2024 has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation.
Command R 08-2024’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured.
By default, Command R 08-2024 will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it inserts grounding spans into the answer. See below for an example. This is referred to as `accurate` grounded generation.
The model is trained with a number of other answering modes, which can be selected by prompt changes. A `fast` citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens.
Comprehensive documentation for working with Command R 08-2024's grounded generation prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r).
The code snippet below shows a minimal working example of how to render a prompt.
<details>
<summary> <b>Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]</b> </summary>
````python
from transformers import AutoTokenizer
model_id = "CohereForAI/c4ai-command-r-08-2024"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# define conversation input:
conversation = [
{"role": "user", "content": "Whats the biggest penguin in the world?"}
]
# define documents to ground on:
documents = [
{ "title": "Tall penguins", "text": "Emperor penguins are the tallest growing up to 122 cm in height." },
{ "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."}
]
# render the grounded generation prompt as a string:
grounded_generation_prompt = tokenizer.apply_grounded_generation_template(
conversation,
documents=documents,
citation_mode="accurate", # or "fast"
tokenize=False,
add_generation_prompt=True,
)
print(grounded_generation_prompt)
````
</details>
<details>
<summary><b>Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]</b></summary>
````
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble
The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral.
# System Preamble
## Basic Rules
You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions.
# User Preamble
## Task and Context
You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging.
## Style Guide
Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|><results>
Document: 0
title: Tall penguins
text: Emperor penguins are the tallest growing up to 122 cm in height.
Document: 1
title: Penguin habitats
text: Emperor penguins only live in Antarctica.
</results><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Carefully perform the following instructions, in order, starting each with a new line.
Firstly, Decide which of the retrieved documents are relevant to the user's last input by writing 'Relevant Documents:' followed by comma-separated list of document numbers. If none are relevant, you should instead write 'None'.
Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user's last input by writing 'Cited Documents:' followed a comma-separated list of document numbers. If you dont want to cite any of them, you should instead write 'None'.
Thirdly, Write 'Answer:' followed by a response to the user's last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup.
Finally, Write 'Grounded answer:' followed by a response to the user's last input in high quality natural english. Use the symbols <co: doc> and </co: doc> to indicate when a fact comes from a document in the search result, e.g <co: 0>my fact</co: 0> for a fact from document 0.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
````
</details>
<details>
<summary><b>Example Rendered Grounded Generation Completion [CLICK TO EXPAND]</b></summary>
````
Relevant Documents: 0,1
Cited Documents: 0,1
Answer: The Emperor Penguin is the tallest or biggest penguin in the world. It is a bird that lives only in Antarctica and grows to a height of around 122 centimetres.
Grounded answer: The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest penguin in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1> and <co: 0>grows to a height of around 122 centimetres.</co: 0>
````
</details>
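The `<co: ...>` markup in the grounded answer can likewise be parsed programmatically. The snippet below is a rough sketch (not part of the official documentation) of one way to recover (document ids, text span) pairs from a grounded answer, assuming spans are tagged with comma-separated document numbers as in the example above.
```python
import re

def extract_grounding_spans(grounded_answer: str) -> list[tuple[list[int], str]]:
    """Return (document_ids, text) pairs for each <co: ...>...</co: ...> span."""
    spans = []
    for match in re.finditer(r"<co:\s*([\d,\s]+)>(.*?)</co:\s*[\d,\s]+>", grounded_answer, re.DOTALL):
        doc_ids = [int(d) for d in match.group(1).split(",")]
        spans.append((doc_ids, match.group(2)))
    return spans

answer = (
    "The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest penguin "
    "in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1>."
)
print(extract_grounding_spans(answer))
# [([0], 'Emperor Penguin'), ([0], 'tallest'), ([1], 'lives only in Antarctica')]
```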
### Code Capabilities:
Command R 08-2024 has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions.
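As an illustration (not from the official documentation), the sketch below shows greedy decoding for a code-related instruction via the standard Transformers generation API; the prompt and generation settings are placeholders.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CohereForAI/c4ai-command-r-08-2024"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding (do_sample=False); alternatively, sample with a low temperature, e.g. 0.3
gen_tokens = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(gen_tokens[0][input_ids.shape[-1]:], skip_special_tokens=True))
```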
### Model Card Contact
For errors or additional questions about details in this model card, contact [[email protected]](mailto:[email protected]).
### Terms of Use:
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 35 billion parameter model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try Chat:
You can try Command-R chat in the playground [here](https://dashboard.cohere.com/playground/chat).
|
{}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 42,328 |
elinas/chronos-13b
|
elinas
|
text-generation
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"chatbot",
"storywriting",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-05-27T01:37:36Z |
2023-06-23T14:34:19+00:00
| 34 | 33 |
---
license: other
tags:
- llama
- pytorch
- chatbot
- storywriting
---
# chronos-13b
This is the fp16 PyTorch / HF version of **chronos-13b**
This model is primarily focused on chat, roleplay, and storywriting, but can accomplish other tasks such as simple reasoning and coding.
Chronos generates very long outputs with coherent text, largely due to the human inputs it was trained on.
This model uses Alpaca formatting, so for optimal model performance, use:
```
### Instruction:
Your instruction or question here.
### Response:
```
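For example, a prompt in this format could be built and passed to the model along these lines (a minimal sketch using standard Transformers APIs; the instruction text and generation settings are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "elinas/chronos-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Build an Alpaca-style prompt as described above
instruction = "Write a short scene in which two travellers shelter from a storm."
prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```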
[4bit Quantized version](https://huggingface.co/elinas/chronos-13b-4bit)
[GGML Version provided by @TheBloke](https://huggingface.co/TheBloke/chronos-13B-GGML)
<!--**Support My Development of New Models**
<a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;'
src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Support Development' /></a>-->
---
license: other
---
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
- exploring potential applications such as question answering, natural language understanding or reading comprehension,
- understanding capabilities and limitations of current language models, and developing techniques to improve those,
- evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measure to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >LLaMA</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
</tbody>
</table>
*Table 1 - Summary of LLaMA Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
</th>
<tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
</th>
<tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
</th>
<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLaMA Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that a lower value is better, indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary bias of our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
| null |
Non_BioNLP
|
# chronos-13b
This is the fp16 PyTorch / HF version of **chronos-13b**
This model is primarily focused on chat, roleplay, and storywriting, but can accomplish other tasks such as simple reasoning and coding.
Chronos generates very long outputs with coherent text, largely due to the human inputs it was trained on.
This model uses Alpaca formatting, so for optimal model performance, use:
```
### Instruction:
Your instruction or question here.
### Response:
```
[4bit Quantized version](https://huggingface.co/elinas/chronos-13b-4bit)
[GGML Version provided by @TheBloke](https://huggingface.co/TheBloke/chronos-13B-GGML)
<!--**Support My Development of New Models**
<a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;'
src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Support Development' /></a>-->
---
license: other
---
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
- exploring potential applications such as question answering, natural language understanding or reading comprehension,
- understanding capabilities and limitations of current language models, and developing techniques to improve those,
- evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measure to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >LLaMA</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
</tbody>
</table>
*Table 1 - Summary of LLaMA Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
</th>
<tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
</th>
<tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
</th>
<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLaMA Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that a lower value is better, indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary bias of our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
|
{"license": "other", "tags": ["llama", "pytorch", "chatbot", "storywriting"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,329 |
Iulian277/ro-bart-large-512
|
Iulian277
|
summarization
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"ro",
"autotrain_compatible",
"region:us"
] | 2023-09-04T11:25:33Z |
2023-09-04T11:33:24+00:00
| 11 | 0 |
---
language:
- ro
tags:
- summarization
- bart
inference: false
---
This is a pretrained-from-scratch **BART large** model (**400M** parameters).
Training was performed on a clean **50GB Romanian** text corpus for 3M steps with these [scripts](https://github.com/cosmoquester/transformers-bart-pretrain). The model was trained with a maximum sequence length of **512**.
**!! IMPORTANT !!** This model was pretrained on the text corruption task, meaning this model is **not usable** in any downstream task **without finetuning** first!
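As a rough sketch of what the start of such a fine-tuning setup could look like (assuming the checkpoint loads with the standard BART seq2seq classes; the example texts are placeholders):
```python
from transformers import AutoTokenizer, BartForConditionalGeneration

model_id = "Iulian277/ro-bart-large-512"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = BartForConditionalGeneration.from_pretrained(model_id)

# Tokenize one (document, summary) pair for seq2seq fine-tuning
document = "Un text lung in limba romana care trebuie rezumat ..."
summary = "Un rezumat scurt."
inputs = tokenizer(document, max_length=512, truncation=True, return_tensors="pt")
labels = tokenizer(text_target=summary, max_length=128, truncation=True, return_tensors="pt")["input_ids"]

# The loss below would drive a standard training loop (e.g. with Trainer or a manual loop)
loss = model(**inputs, labels=labels).loss
print(float(loss))
```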
| null |
Non_BioNLP
|
This is a pretrained-from-scratch **BART large** model (**400M** parameters).
Training was performed on a clean **50GB Romanian** text corpus for 3M steps with these [scripts](https://github.com/cosmoquester/transformers-bart-pretrain). The model was trained with a maximum sequence length of **512**.
**!! IMPORTANT !!** This model was pretrained on the text corruption task, meaning this model is **not usable** in any downstream task **without finetuning** first!
|
{"language": ["ro"], "tags": ["summarization", "bart"], "inference": false}
|
task
|
[
"SUMMARIZATION"
] | 42,330 |
mrapacz/interlinear-pl-mt5-large-emb-concat-normalized-ob
|
mrapacz
|
text2text-generation
|
[
"transformers",
"pytorch",
"morph-t5-concat",
"text2text-generation",
"pl",
"dataset:mrapacz/greek-interlinear-translations",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-02-08T12:25:10Z |
2025-02-21T21:31:20+00:00
| 8 | 0 |
---
base_model:
- mT5-large
datasets:
- mrapacz/greek-interlinear-translations
language:
- pl
library_name: transformers
license: cc-by-sa-4.0
metrics:
- bleu
---
# Model Card for Ancient Greek to Polish Interlinear Translation Model
This model performs interlinear translation from Ancient Greek to Polish, maintaining word-level alignment between source and target texts.
You can find the source code used for training this and other models trained as part of this project in the [GitHub repository](https://github.com/mrapacz/loreslm-interlinear-translation).
## Model Details
### Model Description
- **Developed By:** Maciej Rapacz, AGH University of Kraków
- **Model Type:** MorphT5ConcatForConditionalGeneration
- **Base Model:** mT5-large
- **Tokenizer:** mT5
- **Language(s):** Ancient Greek (source) → Polish (target)
- **License:** CC BY-NC-SA 4.0
- **Tag Set:** OB (Oblubienica)
- **Text Preprocessing:** Normalized
- **Morphological Encoding:** emb-concat
### Model Performance
- **BLEU Score:** 51.75
- **SemScore:** 0.92
### Model Sources
- **Repository:** https://github.com/mrapacz/loreslm-interlinear-translation
- **Paper:** https://aclanthology.org/2025.loreslm-1.11/
## Usage Example
> **Note**: This model uses a modification of T5-family models that includes dedicated embedding layers for encoding morphological information. To load these models, install the [morpht5](https://github.com/mrapacz/loreslm-interlinear-translation/blob/master/morpht5/README.md) package:
> ```bash
> pip install morpht5
> ```
```python
>>> from morpht5 import MorphT5ConcatForConditionalGeneration, MorphT5Tokenizer
>>> text = ['λεγει', 'αυτω', 'ο', 'ιησους', 'εγειρε', 'αρον', 'τον', 'κραβαττον', 'σου', 'και', 'περιπατει']
>>> tags = ['vi Pres Act 3 Sg', 'pp Dat Sg m', 't_ Nom Sg m', 'n_ Nom Sg m', 'vm Pres Act 2 Sg', 'vm Aor Act 2 Sg', 't_ Acc Sg m', 'n_ Acc Sg m', 'pp 2 Gen Sg', 'Conj', 'vm Pres Act 2 Sg']
>>> tokenizer = MorphT5Tokenizer.from_pretrained("mrapacz/interlinear-pl-mt5-large-emb-concat-normalized-ob")
>>> inputs = tokenizer(
text=text,
morph_tags=tags,
return_tensors="pt"
)
>>> model = MorphT5ConcatForConditionalGeneration.from_pretrained("mrapacz/interlinear-pl-mt5-large-emb-concat-normalized-ob")
>>> outputs = model.generate(
**inputs,
max_new_tokens=100,
early_stopping=True,
)
>>> decoded = tokenizer.decode(outputs[0], skip_special_tokens=True, keep_block_separator=True)
>>> decoded = decoded.replace(tokenizer.target_block_separator_token, " | ")
>>> decoded
'Mówi | mu | - | Jezus | wstawaj | weź | - | matę | swoją | i | chodź'
```
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{rapacz-smywinski-pohl-2025-low,
title = "Low-Resource Interlinear Translation: Morphology-Enhanced Neural Models for {A}ncient {G}reek",
author = "Rapacz, Maciej and
Smywi{\'n}ski-Pohl, Aleksander",
editor = "Hettiarachchi, Hansi and
Ranasinghe, Tharindu and
Rayson, Paul and
Mitkov, Ruslan and
Gaber, Mohamed and
Premasiri, Damith and
Tan, Fiona Anting and
Uyangodage, Lasitha",
booktitle = "Proceedings of the First Workshop on Language Models for Low-Resource Languages",
month = jan,
year = "2025",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.loreslm-1.11/",
pages = "145--165",
abstract = "Contemporary machine translation systems prioritize fluent, natural-sounding output with flexible word ordering. In contrast, interlinear translation maintains the source text`s syntactic structure by aligning target language words directly beneath their source counterparts. Despite its importance in classical scholarship, automated approaches to interlinear translation remain understudied. We evaluated neural interlinear translation from Ancient Greek to English and Polish using four transformer-based models: two Ancient Greek-specialized (GreTa and PhilTa) and two general-purpose multilingual models (mT5-base and mT5-large). Our approach introduces novel morphological embedding layers and evaluates text preprocessing and tag set selection across 144 experimental configurations using a word-aligned parallel corpus of the Greek New Testament. Results show that morphological features through dedicated embedding layers significantly enhance translation quality, improving BLEU scores by 35{\%} (44.67 {\textrightarrow} 60.40) for English and 38{\%} (42.92 {\textrightarrow} 59.33) for Polish compared to baseline models. PhilTa achieves state-of-the-art performance for English, while mT5-large does so for Polish. Notably, PhilTa maintains stable performance using only 10{\%} of training data. Our findings challenge the assumption that modern neural architectures cannot benefit from explicit morphological annotations. While preprocessing strategies and tag set selection show minimal impact, the substantial gains from morphological embeddings demonstrate their value in low-resource scenarios."
}
```
| null |
Non_BioNLP
|
# Model Card for Ancient Greek to Polish Interlinear Translation Model
This model performs interlinear translation from Ancient Greek to Polish, maintaining word-level alignment between source and target texts.
You can find the source code used for training this and other models trained as part of this project in the [GitHub repository](https://github.com/mrapacz/loreslm-interlinear-translation).
## Model Details
### Model Description
- **Developed By:** Maciej Rapacz, AGH University of Kraków
- **Model Type:** MorphT5ConcatForConditionalGeneration
- **Base Model:** mT5-large
- **Tokenizer:** mT5
- **Language(s):** Ancient Greek (source) → Polish (target)
- **License:** CC BY-NC-SA 4.0
- **Tag Set:** OB (Oblubienica)
- **Text Preprocessing:** Normalized
- **Morphological Encoding:** emb-concat
### Model Performance
- **BLEU Score:** 51.75
- **SemScore:** 0.92
### Model Sources
- **Repository:** https://github.com/mrapacz/loreslm-interlinear-translation
- **Paper:** https://aclanthology.org/2025.loreslm-1.11/
## Usage Example
> **Note**: This model uses a modification of T5-family models that includes dedicated embedding layers for encoding morphological information. To load these models, install the [morpht5](https://github.com/mrapacz/loreslm-interlinear-translation/blob/master/morpht5/README.md) package:
> ```bash
> pip install morpht5
> ```
```python
>>> from morpht5 import MorphT5ConcatForConditionalGeneration, MorphT5Tokenizer
>>> text = ['λεγει', 'αυτω', 'ο', 'ιησους', 'εγειρε', 'αρον', 'τον', 'κραβαττον', 'σου', 'και', 'περιπατει']
>>> tags = ['vi Pres Act 3 Sg', 'pp Dat Sg m', 't_ Nom Sg m', 'n_ Nom Sg m', 'vm Pres Act 2 Sg', 'vm Aor Act 2 Sg', 't_ Acc Sg m', 'n_ Acc Sg m', 'pp 2 Gen Sg', 'Conj', 'vm Pres Act 2 Sg']
>>> tokenizer = MorphT5Tokenizer.from_pretrained("mrapacz/interlinear-pl-mt5-large-emb-concat-normalized-ob")
>>> inputs = tokenizer(
text=text,
morph_tags=tags,
return_tensors="pt"
)
>>> model = MorphT5ConcatForConditionalGeneration.from_pretrained("mrapacz/interlinear-pl-mt5-large-emb-concat-normalized-ob")
>>> outputs = model.generate(
**inputs,
max_new_tokens=100,
early_stopping=True,
)
>>> decoded = tokenizer.decode(outputs[0], skip_special_tokens=True, keep_block_separator=True)
>>> decoded = decoded.replace(tokenizer.target_block_separator_token, " | ")
>>> decoded
'Mówi | mu | - | Jezus | wstawaj | weź | - | matę | swoją | i | chodź'
```
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{rapacz-smywinski-pohl-2025-low,
title = "Low-Resource Interlinear Translation: Morphology-Enhanced Neural Models for {A}ncient {G}reek",
author = "Rapacz, Maciej and
Smywi{\'n}ski-Pohl, Aleksander",
editor = "Hettiarachchi, Hansi and
Ranasinghe, Tharindu and
Rayson, Paul and
Mitkov, Ruslan and
Gaber, Mohamed and
Premasiri, Damith and
Tan, Fiona Anting and
Uyangodage, Lasitha",
booktitle = "Proceedings of the First Workshop on Language Models for Low-Resource Languages",
month = jan,
year = "2025",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.loreslm-1.11/",
pages = "145--165",
abstract = "Contemporary machine translation systems prioritize fluent, natural-sounding output with flexible word ordering. In contrast, interlinear translation maintains the source text`s syntactic structure by aligning target language words directly beneath their source counterparts. Despite its importance in classical scholarship, automated approaches to interlinear translation remain understudied. We evaluated neural interlinear translation from Ancient Greek to English and Polish using four transformer-based models: two Ancient Greek-specialized (GreTa and PhilTa) and two general-purpose multilingual models (mT5-base and mT5-large). Our approach introduces novel morphological embedding layers and evaluates text preprocessing and tag set selection across 144 experimental configurations using a word-aligned parallel corpus of the Greek New Testament. Results show that morphological features through dedicated embedding layers significantly enhance translation quality, improving BLEU scores by 35{\%} (44.67 {\textrightarrow} 60.40) for English and 38{\%} (42.92 {\textrightarrow} 59.33) for Polish compared to baseline models. PhilTa achieves state-of-the-art performance for English, while mT5-large does so for Polish. Notably, PhilTa maintains stable performance using only 10{\%} of training data. Our findings challenge the assumption that modern neural architectures cannot benefit from explicit morphological annotations. While preprocessing strategies and tag set selection show minimal impact, the substantial gains from morphological embeddings demonstrate their value in low-resource scenarios."
}
```
|
{"base_model": ["mT5-large"], "datasets": ["mrapacz/greek-interlinear-translations"], "language": ["pl"], "library_name": "transformers", "license": "cc-by-sa-4.0", "metrics": ["bleu"]}
|
task
|
[
"TRANSLATION"
] | 42,332 |
google/paligemma2-3b-pt-448
|
google
|
image-text-to-text
|
[
"transformers",
"safetensors",
"paligemma",
"image-text-to-text",
"arxiv:2407.07726",
"arxiv:2408.00118",
"arxiv:2310.09199",
"arxiv:2303.15343",
"arxiv:1706.03762",
"arxiv:2010.11929",
"arxiv:2412.03555",
"arxiv:2209.06794",
"arxiv:2209.04372",
"arxiv:2103.01913",
"arxiv:1908.04913",
"arxiv:1906.02467",
"arxiv:2203.10244",
"arxiv:2205.12522",
"arxiv:2104.12756",
"arxiv:1608.00272",
"arxiv:1511.02283",
"arxiv:1905.13648",
"arxiv:2110.11624",
"arxiv:2108.03353",
"arxiv:1810.12440",
"arxiv:1904.03493",
"arxiv:2010.04295",
"arxiv:1511.09207",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-11-21T20:08:09Z |
2024-12-05T12:45:16+00:00
| 8,063 | 43 |
---
library_name: transformers
license: gemma
pipeline_tag: image-text-to-text
extra_gated_heading: Access PaliGemma on Hugging Face
extra_gated_prompt: To access PaliGemma on Hugging Face, you’re required to review
and agree to Google’s usage license. To do this, please ensure you’re logged-in
to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# PaliGemma 2 model card
**Model page:** [PaliGemma](https://ai.google.dev/gemma/docs/paligemma)
Transformers PaliGemma 2 3B weights, pre-trained with 448*448 input images and 512 token input/output text sequences.
The model is available in the `bfloat16` format for fine-tuning.
**Resources and technical documentation:**
* [PaliGemma 2 on Kaggle](https://www.kaggle.com/models/google/paligemma-2)
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
**Terms of Use:** [Terms](https://ai.google.dev/gemma/terms)
**Authors:** Google
## Model information
### Model summary
PaliGemma 2 is an update of the [PaliGemma](https://arxiv.org/abs/2407.07726)
vision-language model (VLM) which incorporates the capabilities of the
[Gemma 2](https://arxiv.org/abs/2408.00118) models. The PaliGemma family of
models is inspired by [PaLI-3](https://arxiv.org/abs/2310.09199) and based on
open components such as the [SigLIP](https://arxiv.org/abs/2303.15343) vision
model and [Gemma 2](https://arxiv.org/abs/2408.00118) language models. It takes
both image and text as input and generates text as output, supporting multiple
languages. It is designed for class-leading fine-tune performance on a wide
range of vision-language tasks such as image and short video caption, visual
question answering, text reading, object detection and object segmentation.
#### Model architecture
PaliGemma 2 is the composition of a
[Transformer decoder](https://arxiv.org/abs/1706.03762) and a
[Vision Transformer image encoder](https://arxiv.org/abs/2010.11929).
The text decoder is initialized from
[Gemma 2](https://ai.google.dev/gemma/docs/base) in the 2B, 9B, and 27B
parameter sizes. The image encoder is initialized from
[SigLIP-So400m/14](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/SigLIP_demo.ipynb).
Similar to the original PaliGemma model, PaliGemma 2 is trained following the
[PaLI-3](https://arxiv.org/abs/2310.09199) recipes.
#### Inputs and outputs
* **Input:** Image and text string, such as a prompt to caption the image, or
a question.
* **Output:** Generated text in response to the input, such as a caption of
the image, an answer to a question, a list of object bounding box
coordinates, or segmentation codewords.
#### Citation
```none
@article{
title={PaliGemma 2: A Family of Versatile VLMs for Transfer},
author={Andreas Steiner and André Susano Pinto and Michael Tschannen and Daniel Keysers and Xiao Wang and Yonatan Bitton and Alexey Gritsenko and Matthias Minderer and Anthony Sherbondy and Shangbang Long and Siyang Qin and Reeve Ingle and Emanuele Bugliarello and Sahar Kazemzadeh and Thomas Mesnard and Ibrahim Alabdulmohsin and Lucas Beyer and Xiaohua Zhai},
year={2024},
journal={arXiv preprint arXiv:2412.03555}
}
```
### Model data
#### Pre-train datasets
PaliGemma 2 is pre-trained on the following mixture of datasets:
* **WebLI:** [WebLI (Web Language Image)](https://arxiv.org/abs/2209.06794) is
a web-scale multilingual image-text dataset built from the public web. A
wide range of WebLI splits are used to acquire versatile model capabilities,
such as visual semantic understanding, object localization,
visually-situated text understanding, and multilinguality.
* **CC3M-35L:** Curated English image-alt_text pairs from webpages
([Sharma et al., 2018](https://aclanthology.org/P18-1238/)). We used the
[Google Cloud Translation API](https://cloud.google.com/translate) to
translate into 34 additional languages.
* **VQ²A-CC3M-35L/VQG-CC3M-35L:** A subset of VQ2A-CC3M
([Changpinyo et al., 2022a](https://aclanthology.org/2022.naacl-main.142/)),
translated into the same additional 34 languages as CC3M-35L, using the
[Google Cloud Translation API](https://cloud.google.com/translate).
* **OpenImages:** Detection and object-aware questions and answers
([Piergiovanni et al. 2022](https://arxiv.org/abs/2209.04372)) generated by
handcrafted rules on the [OpenImages dataset].
* **WIT:** Images and texts collected from Wikipedia
([Srinivasan et al., 2021](https://arxiv.org/abs/2103.01913)).
[OpenImages dataset]: https://storage.googleapis.com/openimages/web/factsfigures_v7.html
PaliGemma 2 is based on Gemma 2, and you can find information on the
pre-training datasets for Gemma 2 in the
[Gemma 2 model card](https://ai.google.dev/gemma/docs/model_card_2).
#### Data responsibility filtering
The following filters are applied to WebLI, with the goal of training PaliGemma
2 on safe and responsible data:
* **Pornographic image filtering:** This filter removes images deemed to be of
pornographic nature.
* **Text safety filtering:** We identify and filter out images that are paired
with unsafe text. Unsafe text is any text deemed to contain or be about
child sexual abuse imagery (CSAI), pornography, vulgarities, or is otherwise
offensive.
* **Text toxicity filtering:** We further use the [Perspective
API](https://perspectiveapi.com/) to identify and filter out images that are
paired with text deemed insulting, obscene, hateful or otherwise toxic.
* **Text personal information filtering:** We filtered certain personal
information and other sensitive data using the [Cloud Data Loss Prevention
(DLP) API](https://cloud.google.com/security/products/dlp) to protect the
privacy of individuals. Identifiers such as social security numbers and
[other sensitive information types] were removed.
* **Additional methods:** Filtering based on content quality and safety in
line with our policies and practices.
[other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference?_gl=1*jg604m*_ga*ODk5MzA3ODQyLjE3MTAzMzQ3NTk.*_ga_WH2QY8WWF5*MTcxMDUxNTkxMS4yLjEuMTcxMDUxNjA2NC4wLjAuMA..&_ga=2.172110058.-899307842.1710334759
## Use in Transformers
The following snippet uses model `google/paligemma2-3b-pt-448` for reference purposes.
It is a base model and is intended to be used after fine-tuning it on a downstream task.
Here is a [notebook](https://github.com/merveenoyan/smol-vision/blob/main/Fine_tune_PaliGemma.ipynb)
that showcases fine-tuning PaliGemma 2.
```python
from transformers import (
PaliGemmaProcessor,
PaliGemmaForConditionalGeneration,
)
from transformers.image_utils import load_image
import torch
model_id = "google/paligemma2-3b-pt-448"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
image = load_image(url)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto").eval()
processor = PaliGemmaProcessor.from_pretrained(model_id)
# Leaving the prompt blank for pre-trained models
prompt = ""
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(torch.bfloat16).to(model.device)
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
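As a rough sketch of a single supervised fine-tuning step (not from the official docs; it assumes the processor's `suffix` argument builds the training labels, as in the fine-tuning notebook linked above, and the caption and hyperparameters are illustrative):
```python
import torch
from transformers import PaliGemmaProcessor, PaliGemmaForConditionalGeneration
from transformers.image_utils import load_image

model_id = "google/paligemma2-3b-pt-448"
processor = PaliGemmaProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
model.train()

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg")

# `suffix` is the target text; the processor masks the prompt tokens out of the labels
inputs = processor(
    text="caption en", images=image, suffix="A photo of a parked car.", return_tensors="pt"
).to(torch.bfloat16).to(model.device)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss = model(**inputs).loss
loss.backward()
optimizer.step()
```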
## Implementation information
### Hardware
PaliGemma 2 was trained using the latest generation of Tensor Processing Unit
(TPU) hardware (TPUv5e).
### Software
Training was completed using [JAX](https://github.com/google/jax),
[Flax](https://github.com/google/flax),
[TFDS](https://github.com/tensorflow/datasets) and
[`big_vision`](https://github.com/google-research/big_vision).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
TFDS is used to access datasets and Flax is used for model architecture. The
PaliGemma 2 fine-tune code and inference code are released in the `big_vision`
GitHub repository.
## Evaluation information
### Benchmark results
In order to verify the transferability of PaliGemma 2 to a wide variety of
academic tasks, we fine-tune the pretrained models on each task. We report results on
different resolutions to provide an impression of which tasks benefit from
increased resolution. Importantly, none of these tasks or datasets are part of
the pretraining data mixture, and their images are explicitly removed from the
web-scale pre-training data.
#### PaliGemma 2 results by model resolution and size
| Benchmark | 224-3B | 224-10B | 224-28B | 448-3B | 448-10B | 448-28B |
|-------------------------------|:------:|:-------:|:-------:|:------:|:-------:|:-------:|
| [AI2D][ai2d] | 74.7 | 83.1 | 83.2 | 76.0 | 84.4 | 84.6 |
| [AOKVQA-DA][aokvqa-da] (val) | 64.2 | 68.9 | 70.2 | 67.9 | 70.8 | 71.2 |
| [AOKVQA-MC][aokvqa-mc] (val) | 79.7 | 83.7 | 84.7 | 82.5 | 85.9 | 87.0 |
| [ActivityNet-CAP][anet-cap] | 34.2 | 35.9 | - | - | - | - |
| [ActivityNet-QA][anet-qa] | 51.3 | 53.2 | - | - | - | - |
| [COCO-35L][coco-35l] (avg34) | 113.9 | 115.8 | 116.5 | 115.8 | 117.2 | 117.2 |
| [COCO-35L][coco-35l] (en) | 138.4 | 140.8 | 142.4 | 140.4 | 142.4 | 142.3 |
| [COCOcap][coco-cap] | 141.3 | 143.7 | 144.0 | 143.4 | 145.0 | 145.2 |
| [ChartQA][chartqa] (aug) | 74.4 | 74.2 | 68.9 | 89.2 | 90.1 | 85.1 |
| [ChartQA][chartqa] (human) | 42.0 | 48.4 | 46.8 | 54.0 | 66.4 | 61.3 |
| [CountBenchQA][countbenchqa] | 81.0 | 84.0 | 86.4 | 82.0 | 85.3 | 87.4 |
| [DocVQA][docvqa] (val) | 39.9 | 43.9 | 44.9 | 73.6 | 76.6 | 76.1 |
| [GQA][gqa] | 66.2 | 67.2 | 67.3 | 68.1 | 68.3 | 68.3 |
| [InfoVQA][info-vqa] (val) | 25.2 | 33.6 | 36.4 | 37.5 | 47.8 | 46.7 |
| [MARVL][marvl] (avg5) | 83.5 | 89.5 | 90.6 | 82.7 | 89.1 | 89.7 |
| [MSRVTT-CAP][msrvtt] | 68.5 | 72.1 | - | - | - | - |
| [MSRVTT-QA][msrvtt] | 50.5 | 51.9 | - | - | - | - |
| [MSVD-QA][msvd-qa] | 61.1 | 62.5 | - | - | - | - |
| [NLVR2][nlvr2] | 91.4 | 93.9 | 94.2 | 91.6 | 93.7 | 94.1 |
| [NoCaps][nocaps] | 123.1 | 126.3 | 127.1 | 123.5 | 126.9 | 127.0 |
| [OCR-VQA][ocr-vqa] | 73.4 | 74.7 | 75.3 | 75.7 | 76.3 | 76.6 |
| [OKVQA][okvqa] | 64.2 | 68.0 | 71.2 | 64.1 | 68.6 | 70.6 |
| [RSVQA-hr][rsvqa-hr] (test) | 92.7 | 92.6 | 92.7 | 92.8 | 92.8 | 92.8 |
| [RSVQA-hr][rsvqa-hr] (test2) | 90.9 | 90.8 | 90.9 | 90.7 | 90.7 | 90.8 |
| [RSVQA-lr][rsvqa-lr] | 93.0 | 92.8 | 93.5 | 92.7 | 93.1 | 93.7 |
| [RefCOCO][refcoco] (testA) | 75.7 | 77.2 | 76.8 | 78.6 | 79.7 | 79.3 |
| [RefCOCO][refcoco] (testB) | 71.0 | 74.2 | 73.9 | 73.5 | 76.2 | 74.8 |
| [RefCOCO][refcoco] (val) | 73.4 | 75.9 | 75.0 | 76.3 | 78.2 | 77.3 |
| [RefCOCO+][refcoco+] (testA) | 72.7 | 74.7 | 73.6 | 76.1 | 77.7 | 76.6 |
| [RefCOCO+][refcoco+] (testB) | 64.2 | 68.4 | 67.1 | 67.0 | 71.1 | 68.6 |
| [RefCOCO+][refcoco+] (val) | 68.6 | 72.0 | 70.3 | 72.1 | 74.4 | 72.8 |
| [RefCOCOg][refcocog] (test) | 69.0 | 71.9 | 70.7 | 72.7 | 74.8 | 73.7 |
| [RefCOCOg][refcocog] (val) | 68.3 | 71.4 | 70.5 | 72.3 | 74.4 | 73.0 |
| [ST-VQA][st-vqa] (val) | 61.9 | 64.3 | 65.1 | 80.5 | 82.0 | 81.8 |
| [SciCap][scicap] | 165.1 | 159.5 | 156.9 | 183.3 | 177.2 | 172.7 |
| [ScienceQA][scienceqa] | 96.1 | 98.2 | 98.2 | 96.2 | 98.5 | 98.6 |
| [Screen2Words][screen2words] | 113.3 | 117.8 | 122.8 | 114.0 | 119.1 | 123.4 |
| [TallyQA][tallyqa] (complex) | 70.3 | 73.4 | 74.2 | 73.6 | 76.7 | 76.8 |
| [TallyQA][tallyqa] (simple) | 81.8 | 83.2 | 83.4 | 85.3 | 86.2 | 85.7 |
| [TextCaps][textcaps] | 127.5 | 137.9 | 139.9 | 152.1 | 157.7 | 153.6 |
| [TextVQA][textvqa] (val) | 59.6 | 64.0 | 64.7 | 75.2 | 76.6 | 76.2 |
| [VATEX][vatex] | 80.8 | 82.7 | - | - | - | - |
| [VQAv2][vqav2] (minival) | 83.0 | 84.3 | 84.5 | 84.8 | 85.8 | 85.8 |
| [VizWizVQA][vizwiz-vqa] (val) | 76.4 | 78.1 | 78.7 | 77.5 | 78.6 | 78.9 |
| [WidgetCap][widgetcap] | 138.1 | 139.8 | 138.8 | 151.4 | 151.9 | 148.9 |
| [XM3600][xm3600] (avg35) | 42.8 | 44.5 | 45.2 | 43.2 | 44.6 | 45.2 |
| [XM3600][xm3600] (en) | 79.8 | 80.7 | 81.0 | 80.3 | 81.5 | 81.0 |
| [xGQA][xgqa] (avg7) | 58.6 | 61.4 | 61.1 | 60.4 | 62.6 | 62.1 |
#### Additional Benchmarks
**[ICDAR 2015 Incidental][icdar2015-inc]**
| Model | Precision | Recall | F1 |
|-----------------|-----------|:------:|:-----:|
| PaliGemma 2 3B | 81.88 | 70.73 | 75.9 |
**[Total-Text][total-text]**
| Model | Precision | Recall | F1 |
|-----------------|-----------|:------:|:-----:|
| PaliGemma 2 3B | 73.8 | 74.54 | 74.17 |
**[FinTabNet][fintabnet]**
| Model | S-TEDS | TEDS | GriTS-Top | GriTS-Con |
|-----------------|--------|-------|-----------|-----------|
| PaliGemma 2 3B | 99.18 | 98.94 | 99.43 | 99.21 |
**[PubTabNet][pubtabnet]**
| Model | S-TEDS | TEDS | GriTS-Top | GriTS-Con |
|-----------------|--------|-------|-----------|-----------|
| PaliGemma 2 3B | 97.6 | 97.31 | 97.99 | 97.84 |
**[GrandStaff][grandstaff]**
| Model | CER | LER | SER |
|-----------------|-----|-----|-----|
| PaliGemma 2 3B | 1.6 | 6.7 | 2.3 |
**[PubChem][pubchem]**
* PaliGemma 2 3B, Full Match: 94.8
**[DOCCI][docci]**
| Model | avg#char | avg#sent | NES % |
|-----------------|----------|----------|---------|
| PaliGemma 2 3B | 529 | 7.74 | 28.42 |
| PaliGemma 2 10B | 521 | 7.45 | 20.27 |
- *avg#char*: Average number of characters
- *avg#sent*: Average number of sentences
- *NES*: Non entailment sentences
**[MIMIC-CXR][mimic-cxr]**
| Model | CIDEr | BLEU4 | Rouge-L | RadGraph F1 |
|-----------------|-------|-------|---------|-------------|
| PaliGemma 2 3B | 19.9% | 14.6% | 31.92% | 28.8% |
| PaliGemma 2 10B | 17.4% | 15% | 32.41% | 29.5% |
**[Visual Spatial Reasoning][vsr]**
| Model | VSR zeroshot split (test) | VSR random split (test) |
|-----------------|---------------------------|--------------------------|
| PaliGemma 2 3B | 0.75 | 0.82 |
| PaliGemma 2 10B | 0.80 | 0.87 |
## Ethics and safety
### Evaluation approach
Our evaluation methods include structured ethics and safety evaluations across
relevant content policies, including:
* Human evaluation on prompts covering child safety, content safety and
representational harms. See the [Gemma model
card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for
more details on evaluation approach, but with image captioning and visual
question answering setups.
* Image-to-Text benchmark evaluation: Benchmark against relevant academic
datasets such as FairFace Dataset ([Karkkainen et al.,
2021](https://arxiv.org/abs/1908.04913)).
### Evaluation results
* The human evaluation results of ethics and safety evaluations are within
acceptable thresholds for meeting [internal
policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11)
for categories such as child safety, content safety and representational
harms.
* On top of robust internal evaluations, we also use the Perspective API
(threshold of 0.8) to measure toxicity, profanity, and other potential
issues in the generated captions for images sourced from the FairFace
dataset. We report the maximum and median values observed across subgroups
for each of the perceived gender, ethnicity, and age attributes.
<table>
<tr>
<col>
<colgroup span="3"></colgroup>
<colgroup span="3"></colgroup>
<colgroup span="3"></colgroup>
<th>Metric</th>
<th colspan="3" scope="colgroup">Perceived gender</th>
<th colspan="3" scope="colgroup">Ethnicity</th>
<th colspan="3" scope="colgroup">Age group</th>
</tr>
<tr>
<th>Model size</th>
<th scope="col">3B</th>
<th scope="col">10B</th>
<th scope="col">28B</th>
<th scope="col">3B</th>
<th scope="col">10B</th>
<th scope="col">28B</th>
<th scope="col">3B</th>
<th scope="col">10B</th>
<th scope="col">28B</th>
</tr>
<tr>
<th></th>
<th colspan="9" scope="colgroup">Maximum</th>
</tr>
<tr>
<td>Toxicity</td>
<td>0.14%</td>
<td>0.15%</td>
<td>0.19%</td>
<td>0.29%</td>
<td>0.39%</td>
<td>0.39%</td>
<td>0.26%</td>
<td>0.18%</td>
<td>0.32%</td>
</tr>
<tr>
<td>Identity Attack</td>
<td>0.04%</td>
<td>0.02%</td>
<td>0.02%</td>
<td>0.13%</td>
<td>0.06%</td>
<td>0.06%</td>
<td>0.06%</td>
<td>0.03%</td>
<td>0.06%</td>
</tr>
<tr>
<td>Insult</td>
<td>0.17%</td>
<td>0.25%</td>
<td>0.17%</td>
<td>0.37%</td>
<td>0.52%</td>
<td>0.52%</td>
<td>0.27%</td>
<td>0.39%</td>
<td>0.24%</td>
</tr>
<tr>
<td>Threat</td>
<td>0.55%</td>
<td>0.43%</td>
<td>0.57%</td>
<td>0.83%</td>
<td>0.48%</td>
<td>0.48%</td>
<td>0.64%</td>
<td>0.43%</td>
<td>0.64%</td>
</tr>
<tr>
<td>Profanity</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
</tr>
<tr>
<th></th>
<th colspan="9" scope="colgroup">Median</th>
</tr>
<tr>
<td>Toxicity</td>
<td>0.13%</td>
<td>0.10%</td>
<td>0.18%</td>
<td>0.07%</td>
<td>0.07%</td>
<td>0.14%</td>
<td>0.12%</td>
<td>0.08%</td>
<td>0.12%</td>
</tr>
<tr>
<td>Identity Attack</td>
<td>0.02%</td>
<td>0.01%</td>
<td>0.02%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Insult</td>
<td>0.15%</td>
<td>0.23%</td>
<td>0.14%</td>
<td>0.14%</td>
<td>0.17%</td>
<td>0.13%</td>
<td>0.09%</td>
<td>0.18%</td>
<td>0.16%</td>
</tr>
<tr>
<td>Threat</td>
<td>0.35%</td>
<td>0.27%</td>
<td>0.41%</td>
<td>0.28%</td>
<td>0.19%</td>
<td>0.42%</td>
<td>0.27%</td>
<td>0.31%</td>
<td>0.40%</td>
</tr>
<tr>
<td>Profanity</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
</tr>
</table>
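
For reference, the subgroup aggregation reported in the table above can be reproduced with a short script once per-caption attribute scores are available. The sketch below is illustrative only: it assumes scores have already been obtained from the Perspective API, and the subgroup names, data layout, and thresholding interpretation are assumptions rather than the exact evaluation code.

```python
from statistics import median

# Illustrative sketch: per attribute, compute the rate of captions whose
# Perspective API score exceeds the 0.8 threshold within each subgroup,
# then report the maximum and median rate across subgroups.
# The subgroup names and scores below are hypothetical.
THRESHOLD = 0.8

scores = {
    "TOXICITY": {"18-29": [0.01, 0.90, 0.02], "30-39": [0.03, 0.02, 0.01]},
    "PROFANITY": {"18-29": [0.00, 0.01, 0.02], "30-39": [0.01, 0.00, 0.02]},
}

for attribute, by_subgroup in scores.items():
    rates = [
        sum(score > THRESHOLD for score in caption_scores) / len(caption_scores)
        for caption_scores in by_subgroup.values()
    ]
    print(f"{attribute}: max={max(rates):.2%}, median={median(rates):.2%}")
```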
## Usage and limitations
### Intended usage
Open Vision Language Models (VLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
Fine-tune on specific vision-language task:
* The pre-trained models can be fine-tuned on a wide range of vision-language
tasks such as: image captioning, short video caption, visual question
answering, text reading, object detection and object segmentation.
* The pre-trained models can be fine-tuned for specific domains such as remote
sensing question answering, visual questions from people who are blind,
  science question answering, and describing UI element functionalities.
* The pre-trained models can be fine-tuned for tasks with non-textual outputs
such as bounding boxes or segmentation masks.
Vision-language research:
* The pre-trained models and fine-tuned models can serve as a foundation for
researchers to experiment with VLM techniques, develop algorithms, and
contribute to the advancement of the field.
### Ethical considerations and risks
The development of vision-language models (VLMs) raises several ethical
concerns. In creating an open model, we have carefully considered the following:
* Bias and Fairness
* VLMs trained on large-scale, real-world image-text data can reflect
socio-cultural biases embedded in the training material. These models
underwent careful scrutiny, with input data pre-processing described and
posterior evaluations reported in this card.
* Misinformation and Misuse
* VLMs can be misused to generate text that is false, misleading, or
harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](https://ai.google.dev/responsible).
* Transparency and Accountability
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making VLM technology accessible to developers and
researchers across the AI ecosystem.
Risks identified and mitigations:
* **Perpetuation of biases:** Continuous monitoring (using evaluation metrics
  and human review) and the exploration of de-biasing techniques during model
  training, fine-tuning, and other use cases are encouraged.
* **Generation of harmful content:** Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
* **Misuse for malicious purposes:** Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided: see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible).
Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* **Privacy violations:** Models were trained on data filtered to remove
certain personal information and sensitive data. Developers are encouraged
to adhere to privacy regulations with privacy-preserving techniques.
### Limitations
* Most limitations inherited from the underlying Gemma 2 models still apply:
* VLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* Natural language is inherently complex. VLMs might struggle to grasp
subtle nuances, sarcasm, or figurative language.
* VLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* VLMs rely on statistical patterns in language and images. They might
lack the ability to apply common sense reasoning in certain situations.
* PaliGemma 2 was designed first and foremost to serve as a general
pre-trained model for fine-tuning to specialized tasks. Hence, its "out of
the box" or "zero-shot" performance might lag behind models designed
specifically for general purpose use.
* PaliGemma 2 is not a multi-turn chatbot. It is designed for a single round
of image and text input.
[ai2d]: https://allenai.org/data/diagrams
[aokvqa-da]: https://allenai.org/project/a-okvqa/home
[aokvqa-mc]: https://allenai.org/project/a-okvqa/home
[anet-cap]: https://paperswithcode.com/dataset/activitynet-captions
[anet-qa]: https://arxiv.org/abs/1906.02467
[chartqa]: https://arxiv.org/abs/2203.10244
[coco-35l]: https://arxiv.org/pdf/2205.12522
[coco-cap]: https://cocodataset.org/#home
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/
[docvqa]: https://www.docvqa.org/
[gqa]: https://cs.stanford.edu/people/dorarad/gqa/about.html
[info-vqa]: https://arxiv.org/abs/2104.12756
[marvl]: https://marvl-challenge.github.io/
[msrvtt]: https://paperswithcode.com/dataset/msr-vtt
[msvd-qa]: https://paperswithcode.com/dataset/msvd-qa
[nlvr2]: https://lil.nlp.cornell.edu/nlvr/
[nocaps]: https://nocaps.org/
[ocr-vqa]: https://ocr-vqa.github.io/
[okvqa]: https://okvqa.allenai.org/
[refcoco]: https://arxiv.org/abs/1608.00272
[refcoco+]: https://aclanthology.org/D14-1086
[refcocog]: https://arxiv.org/abs/1511.02283
[rsvqa-hr]: https://zenodo.org/records/6344367
[rsvqa-lr]: https://zenodo.org/records/6344334
[st-vqa]: https://arxiv.org/abs/1905.13648
[scicap]: https://arxiv.org/abs/2110.11624
[scienceqa]: https://scienceqa.github.io/
[screen2words]: https://arxiv.org/abs/2108.03353
[tallyqa]: https://arxiv.org/abs/1810.12440
[textcaps]: https://textvqa.org/textcaps/
[textvqa]: https://textvqa.org/
[vatex]: https://arxiv.org/abs/1904.03493
[vizwiz-vqa]: https://vizwiz.org/tasks-and-datasets/vqa/
[widgetcap]: https://arxiv.org/abs/2010.04295
[vqav2]: https://visualqa.org/index.html
[xgqa]: https://aclanthology.org/2022.findings-acl.196/
[xm3600]: https://arxiv.org/pdf/2205.12522
[icdar2015-inc]: https://arxiv.org/abs/1511.09207
[total-text]: https://paperswithcode.com/paper/total-text-a-comprehensive-dataset-for-scene
[fintabnet]: https://developer.ibm.com/data/fintabnet/
[pubtabnet]: https://paperswithcode.com/dataset/pubtabnet
[grandstaff]: https://link.springer.com/article/10.1007/s10032-023-00432-z
[pubchem]: https://pmc.ncbi.nlm.nih.gov/articles/PMC7352161/
[docci]: https://research.google/pubs/docci-descriptions-of-connected-and-contrasting-images/
[mimic-cxr]: https://paperswithcode.com/dataset/mimic-cxr
[vsr]: https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00566/116470/Visual-Spatial-Reasoning
| null |
Non_BioNLP
|
# PaliGemma 2 model card
**Model page:** [PaliGemma](https://ai.google.dev/gemma/docs/paligemma)
Transformers PaliGemma 2 3B weights, pre-trained with 448*448 input images and 512 token input/output text sequences.
The model is available in the `bfloat16` format for fine-tuning.
**Resources and technical documentation:**
* [PaliGemma 2 on Kaggle](https://www.kaggle.com/models/google/paligemma-2)
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
**Terms of Use:** [Terms](https://ai.google.dev/gemma/terms)
**Authors:** Google
## Model information
### Model summary
PaliGemma 2 is an update of the [PaliGemma](https://arxiv.org/abs/2407.07726)
vision-language model (VLM) which incorporates the capabilities of the
[Gemma 2](https://arxiv.org/abs/2408.00118) models. The PaliGemma family of
models is inspired by [PaLI-3](https://arxiv.org/abs/2310.09199) and based on
open components such as the [SigLIP](https://arxiv.org/abs/2303.15343) vision
model and [Gemma 2](https://arxiv.org/abs/2408.00118) language models. It takes
both image and text as input and generates text as output, supporting multiple
languages. It is designed for class-leading fine-tune performance on a wide
range of vision-language tasks such as image and short video caption, visual
question answering, text reading, object detection and object segmentation.
#### Model architecture
PaliGemma 2 is the composition of a
[Transformer decoder](https://arxiv.org/abs/1706.03762) and a
[Vision Transformer image encoder](https://arxiv.org/abs/2010.11929).
The text decoder is initialized from
[Gemma 2](https://ai.google.dev/gemma/docs/base) in the 2B, 9B, and 27B
parameter sizes. The image encoder is initialized from
[SigLIP-So400m/14](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/SigLIP_demo.ipynb).
Similar to the original PaliGemma model, PaliGemma 2 is trained following the
[PaLI-3](https://arxiv.org/abs/2310.09199) recipes.
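
As a quick sanity check of this composition, the sub-modules can be inspected after loading a checkpoint in Transformers. The attribute names below reflect the current `PaliGemmaForConditionalGeneration` implementation and are noted here as an assumption; verify them against your installed version.

```python
from transformers import PaliGemmaForConditionalGeneration

# Assumed attribute names; check them against your transformers version.
model = PaliGemmaForConditionalGeneration.from_pretrained("google/paligemma2-3b-pt-448")
print(type(model.vision_tower).__name__)           # SigLIP-derived Vision Transformer encoder
print(type(model.multi_modal_projector).__name__)  # projection from vision features to the text embedding space
print(type(model.language_model).__name__)         # Gemma 2 text decoder
```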
#### Inputs and outputs
* **Input:** Image and text string, such as a prompt to caption the image, or
a question.
* **Output:** Generated text in response to the input, such as a caption of
the image, an answer to a question, a list of object bounding box
coordinates, or segmentation codewords.
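
For localization tasks, the bounding boxes are emitted as special location tokens inside the generated text. The sketch below shows one way such output could be parsed back into normalized coordinates; the `<locXXXX>` convention (four tokens per box, binned to 0–1023, in y_min/x_min/y_max/x_max order) follows the original PaliGemma format and is stated here as an assumption.

```python
import re

def parse_detections(text: str):
    """Parse runs like '<loc0252><loc0130><loc0770><loc0900> cat' into
    normalized (y_min, x_min, y_max, x_max, label) tuples. Assumes the
    PaliGemma convention of four location tokens per box, binned to 0-1023."""
    pattern = re.compile(r"((?:<loc\d{4}>){4})\s*([^;<]+)")
    detections = []
    for loc_run, label in pattern.findall(text):
        bins = [int(b) for b in re.findall(r"<loc(\d{4})>", loc_run)]
        y_min, x_min, y_max, x_max = (b / 1023 for b in bins)
        detections.append((y_min, x_min, y_max, x_max, label.strip()))
    return detections

print(parse_detections("<loc0252><loc0130><loc0770><loc0900> cat"))
```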
#### Citation
```none
@article{steiner2024paligemma2,
title={PaliGemma 2: A Family of Versatile VLMs for Transfer},
author={Andreas Steiner and André Susano Pinto and Michael Tschannen and Daniel Keysers and Xiao Wang and Yonatan Bitton and Alexey Gritsenko and Matthias Minderer and Anthony Sherbondy and Shangbang Long and Siyang Qin and Reeve Ingle and Emanuele Bugliarello and Sahar Kazemzadeh and Thomas Mesnard and Ibrahim Alabdulmohsin and Lucas Beyer and Xiaohua Zhai},
year={2024},
journal={arXiv preprint arXiv:2412.03555}
}
```
### Model data
#### Pre-train datasets
PaliGemma 2 is pre-trained on the following mixture of datasets:
* **WebLI:** [WebLI (Web Language Image)](https://arxiv.org/abs/2209.06794) is
a web-scale multilingual image-text dataset built from the public web. A
wide range of WebLI splits are used to acquire versatile model capabilities,
such as visual semantic understanding, object localization,
visually-situated text understanding, and multilinguality.
* **CC3M-35L:** Curated English image-alt_text pairs from webpages
([Sharma et al., 2018](https://aclanthology.org/P18-1238/)). We used the
[Google Cloud Translation API](https://cloud.google.com/translate) to
translate into 34 additional languages.
* **VQ²A-CC3M-35L/VQG-CC3M-35L:** A subset of VQ2A-CC3M
([Changpinyo et al., 2022a](https://aclanthology.org/2022.naacl-main.142/)),
translated into the same additional 34 languages as CC3M-35L, using the
[Google Cloud Translation API](https://cloud.google.com/translate).
* **OpenImages:** Detection and object-aware questions and answers
([Piergiovanni et al. 2022](https://arxiv.org/abs/2209.04372)) generated by
handcrafted rules on the [OpenImages dataset].
* **WIT:** Images and texts collected from Wikipedia
([Srinivasan et al., 2021](https://arxiv.org/abs/2103.01913)).
[OpenImages dataset]: https://storage.googleapis.com/openimages/web/factsfigures_v7.html
PaliGemma 2 is based on Gemma 2, and you can find information on the
pre-training datasets for Gemma 2 in the
[Gemma 2 model card](https://ai.google.dev/gemma/docs/model_card_2).
#### Data responsibility filtering
The following filters are applied to WebLI, with the goal of training PaliGemma
2 on safe and responsible data:
* **Pornographic image filtering:** This filter removes images deemed to be of
pornographic nature.
* **Text safety filtering:** We identify and filter out images that are paired
with unsafe text. Unsafe text is any text deemed to contain or be about
child sexual abuse imagery (CSAI), pornography, vulgarities, or is otherwise
offensive.
* **Text toxicity filtering:** We further use the [Perspective
API](https://perspectiveapi.com/) to identify and filter out images that are
paired with text deemed insulting, obscene, hateful or otherwise toxic.
* **Text personal information filtering:** We filtered certain personal
information and other sensitive data using the [Cloud Data Loss Prevention
(DLP) API](https://cloud.google.com/security/products/dlp) to protect the
privacy of individuals. Identifiers such as social security numbers and
[other sensitive information types] were removed.
* **Additional methods:** Filtering based on content quality and safety in
line with our policies and practices.
[other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference?_gl=1*jg604m*_ga*ODk5MzA3ODQyLjE3MTAzMzQ3NTk.*_ga_WH2QY8WWF5*MTcxMDUxNTkxMS4yLjEuMTcxMDUxNjA2NC4wLjAuMA..&_ga=2.172110058.-899307842.1710334759
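
Schematically, such a filter cascade can be thought of as a chain of predicates applied to every (image, text) pair. The sketch below is purely illustrative: the predicate names are placeholders standing in for the actual classifiers and APIs, and it treats every filter as a drop decision even though the real pipeline removes or redacts content in different ways.

```python
# Placeholder predicates; the real filters use dedicated classifiers,
# the Perspective API, and the Cloud DLP API.
def is_pornographic_image(image) -> bool: return False
def has_unsafe_text(text) -> bool: return False
def has_toxic_text(text) -> bool: return False
def has_personal_information(text) -> bool: return False

def keep_pair(image, text) -> bool:
    """Keep an (image, text) pair only if it passes every responsibility filter."""
    return not (
        is_pornographic_image(image)
        or has_unsafe_text(text)
        or has_toxic_text(text)
        or has_personal_information(text)
    )

# filtered = [(img, txt) for img, txt in webli_pairs if keep_pair(img, txt)]
```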
## Use in Transformers
The following snippet uses model `google/paligemma2-3b-pt-448` for reference purposes.
It is a base model, and it is recommended to fine-tune it on a downstream task before use.
Here is a [notebook](https://github.com/merveenoyan/smol-vision/blob/main/Fine_tune_PaliGemma.ipynb)
that showcases fine-tuning PaliGemma 2.
```python
from transformers import (
PaliGemmaProcessor,
PaliGemmaForConditionalGeneration,
)
from transformers.image_utils import load_image
import torch
model_id = "google/paligemma2-3b-pt-448"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
image = load_image(url)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto").eval()
processor = PaliGemmaProcessor.from_pretrained(model_id)
# Leaving the prompt blank for pre-trained models
prompt = ""
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(torch.bfloat16).to(model.device)
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
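
After fine-tuning on a downstream task, the same processor and generation loop can be reused with a task-style prompt instead of the blank one. The `caption en` prefix below mirrors the task prompts used in the PaliGemma family and is shown only as an illustration; the exact prompt depends on how the downstream checkpoint was trained.

```python
# Reuses `model`, `processor`, `image`, and `torch` from the snippet above,
# assuming the checkpoint has been fine-tuned to accept task-prefix prompts.
prompt = "caption en"
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(torch.bfloat16).to(model.device)
input_len = model_inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**model_inputs, max_new_tokens=50, do_sample=False)
    print(processor.decode(generation[0][input_len:], skip_special_tokens=True))
```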
## Implementation information
### Hardware
PaliGemma 2 was trained using the latest generation of Tensor Processing Unit
(TPU) hardware (TPUv5e).
### Software
Training was completed using [JAX](https://github.com/google/jax),
[Flax](https://github.com/google/flax),
[TFDS](https://github.com/tensorflow/datasets) and
[`big_vision`](https://github.com/google-research/big_vision).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
TFDS is used to access datasets and Flax is used for model architecture. The
PaliGemma 2 fine-tune code and inference code are released in the `big_vision`
GitHub repository.
## Evaluation information
### Benchmark results
In order to verify the transferability of PaliGemma 2 to a wide variety of
academic tasks, we fine-tune the pretrained models on each task. We report results on
different resolutions to provide an impression of which tasks benefit from
increased resolution. Importantly, none of these tasks or datasets are part of
the pretraining data mixture, and their images are explicitly removed from the
web-scale pre-training data.
#### PaliGemma 2 results by model resolution and size
| Benchmark | 224-3B | 224-10B | 224-28B | 448-3B | 448-10B | 448-28B |
|-------------------------------|:------:|:-------:|:-------:|:------:|:-------:|:-------:|
| [AI2D][ai2d] | 74.7 | 83.1 | 83.2 | 76.0 | 84.4 | 84.6 |
| [AOKVQA-DA][aokvqa-da] (val) | 64.2 | 68.9 | 70.2 | 67.9 | 70.8 | 71.2 |
| [AOKVQA-MC][aokvqa-mc] (val) | 79.7 | 83.7 | 84.7 | 82.5 | 85.9 | 87.0 |
| [ActivityNet-CAP][anet-cap] | 34.2 | 35.9 | - | - | - | - |
| [ActivityNet-QA][anet-qa] | 51.3 | 53.2 | - | - | - | - |
| [COCO-35L][coco-35l] (avg34) | 113.9 | 115.8 | 116.5 | 115.8 | 117.2 | 117.2 |
| [COCO-35L][coco-35l] (en) | 138.4 | 140.8 | 142.4 | 140.4 | 142.4 | 142.3 |
| [COCOcap][coco-cap] | 141.3 | 143.7 | 144.0 | 143.4 | 145.0 | 145.2 |
| [ChartQA][chartqa] (aug) | 74.4 | 74.2 | 68.9 | 89.2 | 90.1 | 85.1 |
| [ChartQA][chartqa] (human) | 42.0 | 48.4 | 46.8 | 54.0 | 66.4 | 61.3 |
| [CountBenchQA][countbenchqa] | 81.0 | 84.0 | 86.4 | 82.0 | 85.3 | 87.4 |
| [DocVQA][docvqa] (val) | 39.9 | 43.9 | 44.9 | 73.6 | 76.6 | 76.1 |
| [GQA][gqa] | 66.2 | 67.2 | 67.3 | 68.1 | 68.3 | 68.3 |
| [InfoVQA][info-vqa] (val) | 25.2 | 33.6 | 36.4 | 37.5 | 47.8 | 46.7 |
| [MARVL][marvl] (avg5) | 83.5 | 89.5 | 90.6 | 82.7 | 89.1 | 89.7 |
| [MSRVTT-CAP][msrvtt] | 68.5 | 72.1 | - | - | - | - |
| [MSRVTT-QA][msrvtt] | 50.5 | 51.9 | - | - | - | - |
| [MSVD-QA][msvd-qa] | 61.1 | 62.5 | - | - | - | - |
| [NLVR2][nlvr2] | 91.4 | 93.9 | 94.2 | 91.6 | 93.7 | 94.1 |
| [NoCaps][nocaps] | 123.1 | 126.3 | 127.1 | 123.5 | 126.9 | 127.0 |
| [OCR-VQA][ocr-vqa] | 73.4 | 74.7 | 75.3 | 75.7 | 76.3 | 76.6 |
| [OKVQA][okvqa] | 64.2 | 68.0 | 71.2 | 64.1 | 68.6 | 70.6 |
| [RSVQA-hr][rsvqa-hr] (test) | 92.7 | 92.6 | 92.7 | 92.8 | 92.8 | 92.8 |
| [RSVQA-hr][rsvqa-hr] (test2) | 90.9 | 90.8 | 90.9 | 90.7 | 90.7 | 90.8 |
| [RSVQA-lr][rsvqa-lr] | 93.0 | 92.8 | 93.5 | 92.7 | 93.1 | 93.7 |
| [RefCOCO][refcoco] (testA) | 75.7 | 77.2 | 76.8 | 78.6 | 79.7 | 79.3 |
| [RefCOCO][refcoco] (testB) | 71.0 | 74.2 | 73.9 | 73.5 | 76.2 | 74.8 |
| [RefCOCO][refcoco] (val) | 73.4 | 75.9 | 75.0 | 76.3 | 78.2 | 77.3 |
| [RefCOCO+][refcoco+] (testA) | 72.7 | 74.7 | 73.6 | 76.1 | 77.7 | 76.6 |
| [RefCOCO+][refcoco+] (testB) | 64.2 | 68.4 | 67.1 | 67.0 | 71.1 | 68.6 |
| [RefCOCO+][refcoco+] (val) | 68.6 | 72.0 | 70.3 | 72.1 | 74.4 | 72.8 |
| [RefCOCOg][refcocog] (test) | 69.0 | 71.9 | 70.7 | 72.7 | 74.8 | 73.7 |
| [RefCOCOg][refcocog] (val) | 68.3 | 71.4 | 70.5 | 72.3 | 74.4 | 73.0 |
| [ST-VQA][st-vqa] (val) | 61.9 | 64.3 | 65.1 | 80.5 | 82.0 | 81.8 |
| [SciCap][scicap] | 165.1 | 159.5 | 156.9 | 183.3 | 177.2 | 172.7 |
| [ScienceQA][scienceqa] | 96.1 | 98.2 | 98.2 | 96.2 | 98.5 | 98.6 |
| [Screen2Words][screen2words] | 113.3 | 117.8 | 122.8 | 114.0 | 119.1 | 123.4 |
| [TallyQA][tallyqa] (complex) | 70.3 | 73.4 | 74.2 | 73.6 | 76.7 | 76.8 |
| [TallyQA][tallyqa] (simple) | 81.8 | 83.2 | 83.4 | 85.3 | 86.2 | 85.7 |
| [TextCaps][textcaps] | 127.5 | 137.9 | 139.9 | 152.1 | 157.7 | 153.6 |
| [TextVQA][textvqa] (val) | 59.6 | 64.0 | 64.7 | 75.2 | 76.6 | 76.2 |
| [VATEX][vatex] | 80.8 | 82.7 | - | - | - | - |
| [VQAv2][vqav2] (minival) | 83.0 | 84.3 | 84.5 | 84.8 | 85.8 | 85.8 |
| [VizWizVQA][vizwiz-vqa] (val) | 76.4 | 78.1 | 78.7 | 77.5 | 78.6 | 78.9 |
| [WidgetCap][widgetcap] | 138.1 | 139.8 | 138.8 | 151.4 | 151.9 | 148.9 |
| [XM3600][xm3600] (avg35) | 42.8 | 44.5 | 45.2 | 43.2 | 44.6 | 45.2 |
| [XM3600][xm3600] (en) | 79.8 | 80.7 | 81.0 | 80.3 | 81.5 | 81.0 |
| [xGQA][xgqa] (avg7) | 58.6 | 61.4 | 61.1 | 60.4 | 62.6 | 62.1 |
#### Additional Benchmarks
**[ICDAR 2015 Incidental][icdar2015-inc]**
| Model | Precision | Recall | F1 |
|-----------------|-----------|:------:|:-----:|
| PaliGemma 2 3B | 81.88 | 70.73 | 75.9 |
**[Total-Text][total-text]**
| Model | Precision | Recall | F1 |
|-----------------|-----------|:------:|:-----:|
| PaliGemma 2 3B  | 73.8      | 74.54  | 74.17 |
**[FinTabNet][fintabnet]**
| Model | S-TEDS | TEDS | GriTS-Top | GriTS-Con |
|-----------------|--------|-------|-----------|-----------|
| PaliGemma 2 3B | 99.18 | 98.94 | 99.43 | 99.21 |
**[PubTabNet][pubtabnet]**
| Model | S-TEDS | TEDS | GriTS-Top | GriTS-Con |
|-----------------|--------|-------|-----------|-----------|
| PaliGemma 2 3B | 97.6 | 97.31 | 97.99 | 97.84 |
**[GrandStaff][grandstaff]**
| Model | CER | LER | SER |
|-----------------|-----|-----|-----|
| PaliGemma 2 3B | 1.6 | 6.7 | 2.3 |
**[PubChem][pubchem]**
* PaliGemma 2 3B, Full Match: 94.8
**[DOCCI][docci]**
| Model | avg#char | avg#sent | NES % |
|-----------------|----------|----------|---------|
| PaliGemma 2 3B | 529 | 7.74 | 28.42 |
| PaliGemma 2 10B | 521 | 7.45 | 20.27 |
- *avg#char*: Average number of characters
- *avg#sent*: Average number of sentences
- *NES*: Non-entailment sentences
**[MIMIC-CXR][mimic-cxr]**
| Model | CIDEr | BLEU4 | Rouge-L | RadGraph F1 |
|-----------------|-------|-------|---------|-------------|
| PaliGemma 2 3B | 19.9% | 14.6% | 31.92% | 28.8% |
| PaliGemma 2 10B | 17.4% | 15.0% | 32.41%  | 29.5%       |
**[Visual Spatial Reasoning][vsr]**
| Model | VSR zeroshot split (test) | VSR random split (test) |
|-----------------|---------------------------|--------------------------|
| PaliGemma 2 3B | 0.75 | 0.82 |
| PaliGemma 2 10B | 0.80 | 0.87 |
## Ethics and safety
### Evaluation approach
Our evaluation methods include structured ethics and safety evaluations across
relevant content policies, including:
* Human evaluation on prompts covering child safety, content safety and
representational harms. See the [Gemma model
card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for
more details on evaluation approach, but with image captioning and visual
question answering setups.
* Image-to-Text benchmark evaluation: Benchmark against relevant academic
datasets such as FairFace Dataset ([Karkkainen et al.,
2021](https://arxiv.org/abs/1908.04913)).
### Evaluation results
* The human evaluation results of ethics and safety evaluations are within
acceptable thresholds for meeting [internal
policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11)
for categories such as child safety, content safety and representational
harms.
* On top of robust internal evaluations, we also use the Perspective API
(threshold of 0.8) to measure toxicity, profanity, and other potential
issues in the generated captions for images sourced from the FairFace
dataset. We report the maximum and median values observed across subgroups
for each of the perceived gender, ethnicity, and age attributes.
<table>
<tr>
<col>
<colgroup span="3"></colgroup>
<colgroup span="3"></colgroup>
<colgroup span="3"></colgroup>
<th>Metric</th>
<th colspan="3" scope="colgroup">Perceived gender</th>
<th colspan="3" scope="colgroup">Ethnicity</th>
<th colspan="3" scope="colgroup">Age group</th>
</tr>
<tr>
<th>Model size</th>
<th scope="col">3B</th>
<th scope="col">10B</th>
<th scope="col">28B</th>
<th scope="col">3B</th>
<th scope="col">10B</th>
<th scope="col">28B</th>
<th scope="col">3B</th>
<th scope="col">10B</th>
<th scope="col">28B</th>
</tr>
<tr>
<th></th>
<th colspan="9" scope="colgroup">Maximum</th>
</tr>
<tr>
<td>Toxicity</td>
<td>0.14%</td>
<td>0.15%</td>
<td>0.19%</td>
<td>0.29%</td>
<td>0.39%</td>
<td>0.39%</td>
<td>0.26%</td>
<td>0.18%</td>
<td>0.32%</td>
</tr>
<tr>
<td>Identity Attack</td>
<td>0.04%</td>
<td>0.02%</td>
<td>0.02%</td>
<td>0.13%</td>
<td>0.06%</td>
<td>0.06%</td>
<td>0.06%</td>
<td>0.03%</td>
<td>0.06%</td>
</tr>
<tr>
<td>Insult</td>
<td>0.17%</td>
<td>0.25%</td>
<td>0.17%</td>
<td>0.37%</td>
<td>0.52%</td>
<td>0.52%</td>
<td>0.27%</td>
<td>0.39%</td>
<td>0.24%</td>
</tr>
<tr>
<td>Threat</td>
<td>0.55%</td>
<td>0.43%</td>
<td>0.57%</td>
<td>0.83%</td>
<td>0.48%</td>
<td>0.48%</td>
<td>0.64%</td>
<td>0.43%</td>
<td>0.64%</td>
</tr>
<tr>
<td>Profanity</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
</tr>
<tr>
<th></th>
<th colspan="9" scope="colgroup">Median</th>
</tr>
<tr>
<td>Toxicity</td>
<td>0.13%</td>
<td>0.10%</td>
<td>0.18%</td>
<td>0.07%</td>
<td>0.07%</td>
<td>0.14%</td>
<td>0.12%</td>
<td>0.08%</td>
<td>0.12%</td>
</tr>
<tr>
<td>Identity Attack</td>
<td>0.02%</td>
<td>0.01%</td>
<td>0.02%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Insult</td>
<td>0.15%</td>
<td>0.23%</td>
<td>0.14%</td>
<td>0.14%</td>
<td>0.17%</td>
<td>0.13%</td>
<td>0.09%</td>
<td>0.18%</td>
<td>0.16%</td>
</tr>
<tr>
<td>Threat</td>
<td>0.35%</td>
<td>0.27%</td>
<td>0.41%</td>
<td>0.28%</td>
<td>0.19%</td>
<td>0.42%</td>
<td>0.27%</td>
<td>0.31%</td>
<td>0.40%</td>
</tr>
<tr>
<td>Profanity</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
</tr>
</table>
## Usage and limitations
### Intended usage
Open Vision Language Models (VLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
Fine-tune on specific vision-language task:
* The pre-trained models can be fine-tuned on a wide range of vision-language
tasks such as: image captioning, short video caption, visual question
answering, text reading, object detection and object segmentation.
* The pre-trained models can be fine-tuned for specific domains such as remote
sensing question answering, visual questions from people who are blind,
  science question answering, and describing UI element functionalities.
* The pre-trained models can be fine-tuned for tasks with non-textual outputs
such as bounding boxes or segmentation masks.
Vision-language research:
* The pre-trained models and fine-tuned models can serve as a foundation for
researchers to experiment with VLM techniques, develop algorithms, and
contribute to the advancement of the field.
### Ethical considerations and risks
The development of vision-language models (VLMs) raises several ethical
concerns. In creating an open model, we have carefully considered the following:
* Bias and Fairness
* VLMs trained on large-scale, real-world image-text data can reflect
socio-cultural biases embedded in the training material. These models
underwent careful scrutiny, with input data pre-processing described and
posterior evaluations reported in this card.
* Misinformation and Misuse
* VLMs can be misused to generate text that is false, misleading, or
harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](https://ai.google.dev/responsible).
* Transparency and Accountability
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making VLM technology accessible to developers and
researchers across the AI ecosystem.
Risks identified and mitigations:
* **Perpetuation of biases:** Continuous monitoring (using evaluation metrics
  and human review) and the exploration of de-biasing techniques during model
  training, fine-tuning, and other use cases are encouraged.
* **Generation of harmful content:** Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
* **Misuse for malicious purposes:** Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided: see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible).
Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* **Privacy violations:** Models were trained on data filtered to remove
certain personal information and sensitive data. Developers are encouraged
to adhere to privacy regulations with privacy-preserving techniques.
### Limitations
* Most limitations inherited from the underlying Gemma 2 models still apply:
* VLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* Natural language is inherently complex. VLMs might struggle to grasp
subtle nuances, sarcasm, or figurative language.
* VLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* VLMs rely on statistical patterns in language and images. They might
lack the ability to apply common sense reasoning in certain situations.
* PaliGemma 2 was designed first and foremost to serve as a general
pre-trained model for fine-tuning to specialized tasks. Hence, its "out of
the box" or "zero-shot" performance might lag behind models designed
specifically for general purpose use.
* PaliGemma 2 is not a multi-turn chatbot. It is designed for a single round
of image and text input.
[ai2d]: https://allenai.org/data/diagrams
[aokvqa-da]: https://allenai.org/project/a-okvqa/home
[aokvqa-mc]: https://allenai.org/project/a-okvqa/home
[anet-cap]: https://paperswithcode.com/dataset/activitynet-captions
[anet-qa]: https://arxiv.org/abs/1906.02467
[chartqa]: https://arxiv.org/abs/2203.10244
[coco-35l]: https://arxiv.org/pdf/2205.12522
[coco-cap]: https://cocodataset.org/#home
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/
[docvqa]: https://www.docvqa.org/
[gqa]: https://cs.stanford.edu/people/dorarad/gqa/about.html
[info-vqa]: https://arxiv.org/abs/2104.12756
[marvl]: https://marvl-challenge.github.io/
[msrvtt]: https://paperswithcode.com/dataset/msr-vtt
[msvd-qa]: https://paperswithcode.com/dataset/msvd-qa
[nlvr2]: https://lil.nlp.cornell.edu/nlvr/
[nocaps]: https://nocaps.org/
[ocr-vqa]: https://ocr-vqa.github.io/
[okvqa]: https://okvqa.allenai.org/
[refcoco]: https://arxiv.org/abs/1608.00272
[refcoco+]: https://aclanthology.org/D14-1086
[refcocog]: https://arxiv.org/abs/1511.02283
[rsvqa-hr]: https://zenodo.org/records/6344367
[rsvqa-lr]: https://zenodo.org/records/6344334
[st-vqa]: https://arxiv.org/abs/1905.13648
[scicap]: https://arxiv.org/abs/2110.11624
[scienceqa]: https://scienceqa.github.io/
[screen2words]: https://arxiv.org/abs/2108.03353
[tallyqa]: https://arxiv.org/abs/1810.12440
[textcaps]: https://textvqa.org/textcaps/
[textvqa]: https://textvqa.org/
[vatex]: https://arxiv.org/abs/1904.03493
[vizwiz-vqa]: https://vizwiz.org/tasks-and-datasets/vqa/
[widgetcap]: https://arxiv.org/abs/2010.04295
[vqav2]: https://visualqa.org/index.html
[xgqa]: https://aclanthology.org/2022.findings-acl.196/
[xm3600]: https://arxiv.org/pdf/2205.12522
[icdar2015-inc]: https://arxiv.org/abs/1511.09207
[total-text]: https://paperswithcode.com/paper/total-text-a-comprehensive-dataset-for-scene
[fintabnet]: https://developer.ibm.com/data/fintabnet/
[pubtabnet]: https://paperswithcode.com/dataset/pubtabnet
[grandstaff]: https://link.springer.com/article/10.1007/s10032-023-00432-z
[pubchem]: https://pmc.ncbi.nlm.nih.gov/articles/PMC7352161/
[docci]: https://research.google/pubs/docci-descriptions-of-connected-and-contrasting-images/
[mimic-cxr]: https://paperswithcode.com/dataset/mimic-cxr
[vsr]: https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00566/116470/Visual-Spatial-Reasoning
|
{"library_name": "transformers", "license": "gemma", "pipeline_tag": "image-text-to-text", "extra_gated_heading": "Access PaliGemma on Hugging Face", "extra_gated_prompt": "To access PaliGemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license"}
|
task
|
[
"QUESTION_ANSWERING",
"TRANSLATION"
] | 42,333 |
martimfasantos/dpo-sum-10p-shuff-2e-7-full-from-sft-tiny-3epochs
|
martimfasantos
|
summarization
|
[
"safetensors",
"llama",
"summarization",
"generated_from_trainer",
"base_model:martimfasantos/TinyLlama-1.1B-Sum-SFT",
"base_model:finetune:martimfasantos/TinyLlama-1.1B-Sum-SFT",
"license:apache-2.0",
"region:us"
] | 2024-08-19T20:51:37Z |
2024-08-19T20:52:56+00:00
| 15 | 0 |
---
base_model: martimfasantos/sft-sum-chosen-10lp-shuff-full-tiny_same_params
license: apache-2.0
tags:
- summarization
- generated_from_trainer
model-index:
- name: dpo-sum-10p-shuff-2e-7-full-from-sft-tiny-3epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo-sum-10p-shuff-2e-7-full-from-sft-tiny-3epochs
This model is a fine-tuned version of [martimfasantos/sft-sum-chosen-10lp-shuff-full-tiny_same_params](https://huggingface.co/martimfasantos/sft-sum-chosen-10lp-shuff-full-tiny_same_params) on the openai/summarize_from_feedback dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-07
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
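
For reference, a roughly equivalent configuration could be expressed with `TrainingArguments` as sketched below; this mirrors the listed hyperparameters and is not the original training script (the precision setting is an assumption, since it is not stated in this card).

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above, not the original training script.
training_args = TrainingArguments(
    output_dir="dpo-sum-10p-shuff-2e-7-full-from-sft-tiny-3epochs",
    learning_rate=2e-7,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=16,  # 1 per device x 16 accumulation x 4 GPUs = 64 effective
    num_train_epochs=3,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-8,
    seed=42,
    bf16=True,  # assumption: mixed precision is not stated in the card
)
```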
### Training results
### Framework versions
- Transformers 4.43.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo-sum-10p-shuff-2e-7-full-from-sft-tiny-3epochs
This model is a fine-tuned version of [martimfasantos/sft-sum-chosen-10lp-shuff-full-tiny_same_params](https://huggingface.co/martimfasantos/sft-sum-chosen-10lp-shuff-full-tiny_same_params) on the openai/summarize_from_feedback dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-07
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.43.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
{"base_model": "martimfasantos/sft-sum-chosen-10lp-shuff-full-tiny_same_params", "license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "dpo-sum-10p-shuff-2e-7-full-from-sft-tiny-3epochs", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 42,334 |
sail/Sailor-7B-Chat
|
sail
|
text-generation
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"multilingual",
"sea",
"sailor",
"sft",
"chat",
"instruction",
"conversational",
"en",
"zh",
"id",
"th",
"vi",
"ms",
"lo",
"dataset:CohereForAI/aya_dataset",
"dataset:CohereForAI/aya_collection",
"dataset:Open-Orca/OpenOrca",
"arxiv:2404.03608",
"base_model:sail/Sailor-7B",
"base_model:finetune:sail/Sailor-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-03-02T09:48:59Z |
2024-12-21T10:39:52+00:00
| 153 | 7 |
---
base_model: sail/Sailor-7B
datasets:
- CohereForAI/aya_dataset
- CohereForAI/aya_collection
- Open-Orca/OpenOrca
language:
- en
- zh
- id
- th
- vi
- ms
- lo
license: apache-2.0
tags:
- multilingual
- sea
- sailor
- sft
- chat
- instruction
widget:
- text: 如何制作烤鱼?
example_title: Chinese
- text: How to bake fish?
example_title: English
- text: Bagaimana cara memanggang ikan?
example_title: Malay
- text: วิธีย่างปลา?
example_title: Thai
- text: Bagaimana membuat bakaran ikan?
example_title: Indonesian
- text: Làm thế nào để nướng cá?
example_title: Vietnamese
---
<div align="center">
<img src="banner_sailor.jpg" width="700"/>
</div>
Sailor is a suite of Open Language Models tailored for South-East Asia (SEA), focusing on languages such as 🇮🇩Indonesian, 🇹🇭Thai, 🇻🇳Vietnamese, 🇲🇾Malay, and 🇱🇦Lao.
Developed with careful data curation, Sailor models are designed to understand and generate text across the diverse linguistic landscapes of the SEA region.
Built from [Qwen 1.5](https://huggingface.co/collections/Qwen/qwen15-65c0a2f577b1ecb76d786524), Sailor encompasses models of varying sizes, spanning from 0.5B to 14B versions for different requirements.
We further fine-tune the base model with open-source datasets to get instruction-tuned models, namely Sailor-Chat.
Benchmarking results demonstrate Sailor's proficiency in tasks such as question answering, commonsense reasoning, and other tasks in SEA languages.
> The logo was generated by MidJourney
## Model Summary
- **Model Collections:** [Base Model & Chat Model](https://huggingface.co/collections/sail/sailor-65e19a749f978976f1959825)
- **Project Website:** [sea-sailor.github.io/blog/sailor1/](https://sea-sailor.github.io/blog/sailor1/)
- **Codebase:** [github.com/sail-sg/sailor-llm](https://github.com/sail-sg/sailor-llm)
- **Technical Report:** [arxiv.org/pdf/2404.03608.pdf](https://arxiv.org/pdf/2404.03608.pdf)
## Training details
Sailor is crafted by continually pre-training from language models like the remarkable Qwen 1.5 models, which already have great performance on SEA languages.
The pre-training corpus heavily leverages publicly available corpora, including
[SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B),
[SkyPile](https://huggingface.co/datasets/Skywork/SkyPile-150B),
[CC100](https://huggingface.co/datasets/cc100) and [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400).
The instruction tuning corpora are all publicly available, including
[aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection),
[aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset),
[OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca).
By employing aggressive data deduplication and careful data cleaning on the collected corpus, we have attained a high-quality dataset spanning various languages.
Through systematic experiments to determine the weights of different languages, Sailor models undergo training from 200B to 400B tokens, tailored to different model sizes.
The approach boosts their performance on SEA languages while maintaining proficiency in English and Chinese without significant compromise.
Finally, we continually pre-train the Qwen1.5-0.5B model with 400 Billion tokens, and other models with 200 Billion tokens to obtain the Sailor models.
## Requirements
The code for Sailor is included in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`.
## Quickstart
Here is a code snippet showing how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda"
model = AutoModelForCausalLM.from_pretrained(
'sail/Sailor-7B-Chat',
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained('sail/Sailor-7B-Chat')
system_prompt= 'You are a helpful assistant'
prompt = "Beri saya pengenalan singkat tentang model bahasa besar."
# prompt = "Hãy cho tôi một giới thiệu ngắn gọn về mô hình ngôn ngữ lớn."
# prompt = "ให้ฉันแนะนำสั้น ๆ เกี่ยวกับโมเดลภาษาขนาดใหญ่"
messages = [
{"role": "system", "content": system_prompt},
{"role": "question", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
input_ids = model_inputs.input_ids.to(device)
generated_ids = model.generate(
input_ids,
max_new_tokens=512,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
# License
Sailor is distributed under the terms of the Apache License 2.0.
There are no restrictions on research or commercial use, but use must comply with the [Qwen License](https://huggingface.co/Qwen/Qwen1.5-1.8B/blob/main/LICENSE).
## Citation
If you find Sailor useful, please cite our work as follows:
```
@inproceedings{dou-etal-2024-sailor,
title = "Sailor: Open Language Models for South-{E}ast {A}sia",
author = "Dou, Longxu and Liu, Qian and Zeng, Guangtao and Guo, Jia and Zhou, Jiahui and Mao, Xin and Jin, Ziqi and Lu, Wei and Lin, Min",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
year = "2024",
}
```
# Contact Us
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]) or [[email protected]](mailto:[email protected]).
| null |
Non_BioNLP
|
<div align="center">
<img src="banner_sailor.jpg" width="700"/>
</div>
Sailor is a suite of Open Language Models tailored for South-East Asia (SEA), focusing on languages such as 🇮🇩Indonesian, 🇹🇭Thai, 🇻🇳Vietnamese, 🇲🇾Malay, and 🇱🇦Lao.
Developed with careful data curation, Sailor models are designed to understand and generate text across the diverse linguistic landscapes of the SEA region.
Built from [Qwen 1.5](https://huggingface.co/collections/Qwen/qwen15-65c0a2f577b1ecb76d786524), Sailor encompasses models of varying sizes, spanning from 0.5B to 14B versions for different requirements.
We further fine-tune the base model with open-source datasets to get instruction-tuned models, namely Sailor-Chat.
Benchmarking results demonstrate Sailor's proficiency in tasks such as question answering, commonsense reasoning, and other tasks in SEA languages.
> The logo was generated by MidJourney
## Model Summary
- **Model Collections:** [Base Model & Chat Model](https://huggingface.co/collections/sail/sailor-65e19a749f978976f1959825)
- **Project Website:** [sea-sailor.github.io/blog/sailor1/](https://sea-sailor.github.io/blog/sailor1/)
- **Codebase:** [github.com/sail-sg/sailor-llm](https://github.com/sail-sg/sailor-llm)
- **Technical Report:** [arxiv.org/pdf/2404.03608.pdf](https://arxiv.org/pdf/2404.03608.pdf)
## Training details
Sailor is crafted by continually pre-training from language models like the remarkable Qwen 1.5 models, which already have great performance on SEA languages.
The pre-training corpus heavily leverages publicly available corpora, including
[SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B),
[SkyPile](https://huggingface.co/datasets/Skywork/SkyPile-150B),
[CC100](https://huggingface.co/datasets/cc100) and [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400).
The instruction tuning corpora are all publicly available, including
[aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection),
[aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset),
[OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca).
By employing aggressive data deduplication and careful data cleaning on the collected corpus, we have attained a high-quality dataset spanning various languages.
Through systematic experiments to determine the weights of different languages, Sailor models undergo training from 200B to 400B tokens, tailored to different model sizes.
The approach boosts their performance on SEA languages while maintaining proficiency in English and Chinese without significant compromise.
Finally, we continually pre-train the Qwen1.5-0.5B model with 400 Billion tokens, and other models with 200 Billion tokens to obtain the Sailor models.
## Requirements
The code for Sailor is included in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`.
## Quickstart
Here is a code snippet showing how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda"
model = AutoModelForCausalLM.from_pretrained(
'sail/Sailor-7B-Chat',
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained('sail/Sailor-7B-Chat')
system_prompt= 'You are a helpful assistant'
prompt = "Beri saya pengenalan singkat tentang model bahasa besar."
# prompt = "Hãy cho tôi một giới thiệu ngắn gọn về mô hình ngôn ngữ lớn."
# prompt = "ให้ฉันแนะนำสั้น ๆ เกี่ยวกับโมเดลภาษาขนาดใหญ่"
messages = [
{"role": "system", "content": system_prompt},
{"role": "question", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
input_ids = model_inputs.input_ids.to(device)
generated_ids = model.generate(
input_ids,
max_new_tokens=512,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
# License
Sailor is distributed under the terms of the Apache License 2.0.
There are no restrictions on research or commercial use, but use must comply with the [Qwen License](https://huggingface.co/Qwen/Qwen1.5-1.8B/blob/main/LICENSE).
## Citation
If you find Sailor useful, please cite our work as follows:
```
@inproceedings{dou-etal-2024-sailor,
title = "Sailor: Open Language Models for South-{E}ast {A}sia",
author = "Dou, Longxu and Liu, Qian and Zeng, Guangtao and Guo, Jia and Zhou, Jiahui and Mao, Xin and Jin, Ziqi and Lu, Wei and Lin, Min",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
year = "2024",
}
```
# Contact Us
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]) or [[email protected]](mailto:[email protected]).
|
{"base_model": "sail/Sailor-7B", "datasets": ["CohereForAI/aya_dataset", "CohereForAI/aya_collection", "Open-Orca/OpenOrca"], "language": ["en", "zh", "id", "th", "vi", "ms", "lo"], "license": "apache-2.0", "tags": ["multilingual", "sea", "sailor", "sft", "chat", "instruction"], "widget": [{"text": "如何制作烤鱼?", "example_title": "Chinese"}, {"text": "How to bake fish?", "example_title": "English"}, {"text": "Bagaimana cara memanggang ikan?", "example_title": "Malay"}, {"text": "วิธีย่างปลา?", "example_title": "Thai"}, {"text": "Bagaimana membuat bakaran ikan?", "example_title": "Indonesian"}, {"text": "Làm thế nào để nướng cá?", "example_title": "Vietnamese"}]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,335 |
AI-Sweden-Models/gpt-sw3-1.3b-instruct
|
AI-Sweden-Models
|
text-generation
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"da",
"sv",
"no",
"en",
"is",
"dataset:laion/OIG",
"dataset:databricks/databricks-dolly-15k",
"dataset:OpenAssistant/oasst1",
"base_model:AI-Sweden-Models/gpt-sw3-1.3b",
"base_model:finetune:AI-Sweden-Models/gpt-sw3-1.3b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-04-28T11:40:17Z |
2025-01-07T13:01:40+00:00
| 4,232 | 3 |
---
base_model: AI-Sweden-Models/gpt-sw3-1.3b
datasets:
- laion/OIG
- databricks/databricks-dolly-15k
- OpenAssistant/oasst1
language:
- da
- sv
- 'no'
- en
- is
license: other
---
# Model description
[AI Sweden](https://huggingface.co/AI-Sweden-Models/)
**Base models**
[GPT-Sw3 126M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/) | [GPT-Sw3 356M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/) | [GPT-Sw3 1.3B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/)
[GPT-Sw3 6.7B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b/) | [GPT-Sw3 6.7B v2](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2/) | [GPT-Sw3 20B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b/)
[GPT-Sw3 40B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-40b/)
**Instruct models**
[GPT-Sw3 126M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m-instruct/) | [GPT-Sw3 356M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m-instruct/) | [GPT-Sw3 1.3B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b-instruct/)
[GPT-Sw3 6.7B v2 Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct/) | [GPT-Sw3 20B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct/)
**Quantized models**
[GPT-Sw3 6.7B v2 Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq) | [GPT-Sw3 20B Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct-4bit-gptq)
GPT-SW3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-SW3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation.
The `instruct` models were fine-tuned on instruction data using both chat and raw text formats.
# Intended use
GPT-SW3 is an autoregressive large language model that is capable of generating coherent text in 5 different languages, and 4 programming languages. GPT-SW3 can also be instructed to perform text tasks that it has not been explicitly trained for, by casting them as text generation tasks.
# Limitations
Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of, for example, bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: overrepresent some viewpoints and underrepresent others, contain stereotypes, and generate hateful, abusive, violent, discriminatory or prejudicial language. The model may make errors, including producing incorrect information as if it were factual; it may generate irrelevant or repetitive outputs and content that may not be appropriate for all settings, including sexual content.
# How to use
Since this is a private repository, you have to log in with your access token to be able to access the model from Python. This can be done with `huggingface-cli login`; see the [HuggingFace Quick Start Guide](https://huggingface.co/docs/huggingface_hub/quick-start#login) for more information.
The following code snippet loads our tokenizer & model, and uses the GPU if available.
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
# Initialize Variables
model_name = "AI-Sweden-Models/gpt-sw3-1.3b-instruct"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
prompt = "Träd är fina för att"
# Initialize Tokenizer & Model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
model.to(device)
```
Generating text using the `generate` method is done as follows:
```python
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device)
generated_token_ids = model.generate(
inputs=input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.6,
top_p=1,
)[0]
generated_text = tokenizer.decode(generated_token_ids)
```
The chat format used during data-preprocessing takes the form:
```
<|endoftext|><s>
User:
Jag tycker träd är fina
<s>
Bot:
Kul att du tycker det!
<s>
...
```
The procedure to generate text is the same as before:
```python
prompt = """
<|endoftext|><s>
User:
Varför är träd fina?
<s>
Bot:
""".strip()
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device)
generated_token_ids = model.generate(
inputs=input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.6,
top_p=1,
)[0]
generated_text = tokenizer.decode(generated_token_ids)
```
Generating text using the `generate` method is done as follows:
```python
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device)
generated_token_ids = model.generate(
inputs=input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.6,
top_p=1,
)[0]
generated_text = tokenizer.decode(generated_token_ids)
```
A convenient alternative to the `generate` method is the HuggingFace pipeline, which handles most of the work for you:
```python
generator = pipeline('text-generation', tokenizer=tokenizer, model=model, device=device)
generated = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1)[0]["generated_text"]
```
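
Since the chat format is plain text, a small helper can build prompts from a list of turns. This is an illustrative convenience wrapper based on the format shown above, not part of the released tooling.

```python
def build_chat_prompt(turns, add_bot_header=True):
    """Format (speaker, message) turns into the GPT-SW3 chat format shown above.
    Illustrative helper only; speakers are expected to be 'User' or 'Bot'."""
    parts = ["<|endoftext|><s>"]
    for speaker, message in turns:
        parts.append(f"{speaker}:\n{message}\n<s>")
    if add_bot_header:
        parts.append("Bot:")
    return "\n".join(parts)

prompt = build_chat_prompt([("User", "Varför är träd fina?")])
print(prompt)
```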
# Compliance
The release of GPT-SW3 consists of model weights, a configuration file, a tokenizer file and a vocabulary file. None of these files contain any personally identifiable information (PII) or any copyrighted material.
# GPT-SW3 Model Card
Following Mitchell et al. (2018), we provide a model card for GPT-SW3.
# Model Details
- Person or organization developing model: GPT-SW3 was developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language.
- Model date: GPT-SW3 date of release 2022-12-20
- Model version: This is the second generation of GPT-SW3.
- Model type: GPT-SW3 is a large decoder-only transformer language model.
- Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: GPT-SW3 was trained with the NeMo Megatron GPT implementation.
- Paper or other resource for more information: N/A.
- License: [LICENSE](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b-instruct/blob/main/LICENSE).
- Where to send questions or comments about the model: [email protected]
# Intended Use
- Primary intended uses: We pre-release GPT-SW3 for research and evaluation of the capabilities of Large Language Models for the Nordic languages. This is an important step in the process of knowledge building for LLMs, validating the model and collecting feedback on both what works well and what does not.
- Primary intended users: Organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community.
- Out-of-scope use cases: See the modified RAIL license.
# Data, Limitations, and Recommendations
- Data selection for training: Training data for GPT-SW3 was selected based on a combination of breadth and availability. See our Datasheet for more detailed information on the data used to train our model.
- Data selection for evaluation: N/A
- Limitations: Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. In general, GPT-SW3 is not immune from the plethora of issues that plague modern large language models. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: Overrepresent some viewpoints and underrepresent others. Contain stereotypes. Generate: Hateful, abusive, or violent language. Discriminatory or prejudicial language. Content that may not be appropriate for all settings, including sexual content. Make errors, including producing incorrect information as if it were factual. Generate irrelevant or repetitive outputs.
- Recommendations for future work: Indirect users should be made aware when the content they're working with is created by the LLM. Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. Models pretrained with the LLM should include an updated Model Card. Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
- We hope that the release of GPT-SW3, as well as information around our model training process, will increase open science around both large language models in specific and natural language processing and deep learning in general.
# GPT-SW3 Datasheet
- We follow the recommendations of Gebru et al. (2021) and provide a datasheet for the dataset used to train GPT-SW3.
# Motivation
- For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description. Pre-training of Large Language Models (LLMs), such as GPT-3 (T. B. Brown et al., 2020), Gopher (J. W. Rae et al., 2022), BLOOM (T. L. Scao et al., 2022), etc., requires hundreds or even thousands of gigabytes of text data, with recent studies (Chinchilla: J. Hoffmann et al., 2022) suggesting that the scale of the training data is even more important than previously imagined. Therefore, in order to train Swedish LLMs, we needed a large-scale Swedish dataset of high quality. Since no such datasets existed before this initiative, we collected data in the Nordic and English languages.
- Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? The Strategic Initiative Natural Language Understanding at AI Sweden has established a new research environment in which collaboration is key. The core team working on the creation of the dataset is the NLU research group at AI Sweden. This group consists of researchers and developers from AI Sweden (Lindholmen Science Park AB) and RISE.
- Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number. The Swedish Innovation Agency (Vinnova) has funded this work across several different grants, including 2019-02996 and 2022-00949.
- Any other comments? No.
# Composition
- What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. The instances are textual documents categorized by language and document type. The dataset is a filtered and deduplicated collection that includes the following sources:
- Books
- Litteraturbanken (https://litteraturbanken.se/)
- The Pile
- Articles
- Diva (https://www.diva-portal.org/)
- The Pile: PubMed
- The Pile: ArXiv
- Code
- Code Parrot: Github code (https://huggingface.co/datasets/codeparrot/github-code)
- Conversational
- Familjeliv (https://www.familjeliv.se/)
- Flashback (https://flashback.se/)
- Datasets collected through Parlai (see Appendix in data paper for complete list) (https://github.com/facebookresearch/ParlAI)
- Pushshift.io Reddit dataset, developed in Baumgartner et al. (2020) and processed in Roller et al. (2021)
- Math
- English Math dataset generated with code from DeepMind (D. Saxton et al., 2019)
- Swedish Math dataset, generated as above with manually translated templates
- Miscellaneous
- Summarization data (https://www.ida.liu.se/~arnjo82/papers/clarin-21-julius.pdf)
- OPUS, the open parallel corpus (https://opus.nlpl.eu/)
- Movie scripts (https://github.com/Aveek-Saha/Movie-Script-Database)
- Natural Instructions (https://github.com/allenai/natural-instructions)
- P3 (Public Pool of Prompts), (https://huggingface.co/datasets/bigscience/P3)
- The Norwegian Colossal Corpus (https://huggingface.co/datasets/NbAiLab/NCC)
- Danish Gigaword (https://gigaword.dk/)
- Icelandic Gigaword (https://clarin.is/en/resources/gigaword/)
- The Pile: Stack Exchange
- Web Common Crawl
- Web data from the project LES (Linguistic Explorations of Societies, https://les.gu.se).
- Multilingual C4 (MC4), prepared by AllenAI from C4 (C. Raffel et al., 2019)
- Open Super-large Crawled Aggregated coRpus (OSCAR) (P. O. Suarez, 2019)
- The Pile: Open Web Text
- Web Sources
- Various public Swedish website scrapes (see Appendix in data paper)
- Familjeliv Articles
- Public Swedish Job Ads from JobTech/Arbetsförmedlingen
- Wikipedia
- Official Wikipedia dumps
- **Instruction data**:
- [dolly](https://github.com/databrickslabs/dolly/tree/master/data)
- [Open Assistant](https://github.com/LAION-AI/Open-Assistant/blob/main/docs/docs/data/datasets.md)
- [OIG](https://laion.ai/blog/oig-dataset/)
- Fass: Swedish pharmaceutical information, which was transformed into Q&A format.
- How many instances are there in total (of each type, if appropriate)? The training data consists of 1.1TB UTF-8 encoded text, containing 660M documents with a total of 320B tokens.
- Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). The subset of our dataset that comes from multilingual Common Crawl datasets (MC4, OSCAR) is filtered by language to only include Swedish, Norwegian, Danish, and Icelandic. From The Pile, we included only the parts that typically are of highest textual quality or that complemented the rest of our dataset with sources we otherwise lacked (e.g. books). The remainder of the dataset was collected from the above sources.
- What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description. Each instance consists of raw text data.
- Is there a label or target associated with each instance? If so, please provide a description. No.
- Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text. No.
- Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit. There are no explicit relationships between individual instances.
- Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them. There are no explicit splits recommended for this dataset. When pre-training the model, a random split for train, dev, test is set to 99.99%, 0.08%, 0.02% respectively, and is sampled proportionally to each subset’s weight and size. The weight of each subset was manually decided beforehand. These decisions were made considering the data’s value, source, and language, to form a representative and balanced pre-training corpus.
- Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. The dataset is a collection of many sources, some of which naturally contain some overlap. Although we have performed deduplication, some overlap may still remain. Furthermore, there may be some noise remaining from artifacts originating in Common Crawl datasets that have been missed by our data filtering process. Except for these, we are not aware of any errors, sources of noise, or redundancies.
- Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? The dataset is self-contained.
- Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. The dataset contains subsets of public Common Crawl, Reddit, Familjeliv and Flashback. These could contain sentences that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety.
- Does the dataset relate to people? If not, you may skip the remaining questions in this section. Some documents of this data relate to people, such as news articles, Wikipedia descriptions, etc.
- Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. No, the dataset does not explicitly include subpopulation identification.
- Any other comments? No.
# Collection Process
- How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. N/A. The dataset is a union of publicly available datasets and sources.
- What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated? The data was downloaded from the internet.
- If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? Please see previous answers for how parts of the dataset were selected.
- Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? This data is mined, filtered and sampled by machines.
- Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. The dataset was collected during the period June 2021 to June 2022. The creation of the collected sources varies, with e.g. Common Crawl data that have been continuously collected over 12 years.
- Does the dataset relate to people? If not, you may skip the remainder of the questions in this section. Yes. The texts have been produced by people. Any personal information potentially present in publicly available data sources and thus in the created dataset is of no interest to the collection and use of the dataset.
- Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. Yes.
- Any other comments? No.
# Preprocessing/cleaning/labeling
- Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remainder of the questions in this section. The dataset was filtered and re-formatted on a document-level using standard procedures, inspired by the work in The BigScience ROOTS Corpus (H. Laurençon et al., 2022) and Gopher (J. W. Rae et al., 2022). This was done with the goal of achieving a consistent text format throughout the dataset, and to remove documents that did not meet our textual quality requirements (e.g. repetitiveness). Furthermore, the dataset was deduplicated to remedy the overlap between collected subsets using the MinHash algorithm, similar to the method used in GPT-3 and The Pile, and described in greater detail in “Deduplicating Training Data Makes Language Models Better” (K. Lee et al., 2021).
**Instruction data**: The processing outlined above was not applied to the instruction data.
Instruction data was turned into chat-turn format and formatted accordingly with an end-of-turn token, as well as unrolled into raw textual form.
The Open Assistant data was also automatically translated using GPT-SW3 into Swedish, Danish, Norwegian, and Icelandic.
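To make the deduplication step concrete, here is a minimal sketch of document-level MinHash deduplication. It is an illustration only: the `datasketch` library, the shingle size, and the similarity threshold below are our assumptions, not the actual GPT-SW3 pipeline.
```python
# Minimal MinHash deduplication sketch (illustrative only; the real GPT-SW3
# pipeline, shingle size, and similarity threshold are assumptions here).
from datasketch import MinHash, MinHashLSH

def minhash(doc: str, num_perm: int = 128, shingle: int = 5) -> MinHash:
    m = MinHash(num_perm=num_perm)
    tokens = doc.split()
    for i in range(max(len(tokens) - shingle + 1, 1)):
        m.update(" ".join(tokens[i:i + shingle]).encode("utf-8"))
    return m

def deduplicate(docs, threshold: float = 0.8):
    lsh = MinHashLSH(threshold=threshold, num_perm=128)
    kept = []
    for idx, doc in enumerate(docs):
        m = minhash(doc)
        if not lsh.query(m):  # keep only documents with no near-duplicate seen so far
            lsh.insert(str(idx), m)
            kept.append(doc)
    return kept

corpus = ["ett dokument om träd", "ett dokument om träd", "något helt annat"]
print(deduplicate(corpus))  # the verbatim duplicate is dropped
```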
- Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data. The “raw” component datasets are publicly available in their respective locations.
- Any other comments? No.
# Uses
- Has the dataset been used for any tasks already? If so, please provide a description. The dataset was used to pre-train the GPT-SW3 models.
- Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point. N/A.
- What (other) tasks could the dataset be used for? The data can be used to pre-train language models, which are foundations for many current and future language tasks.
- Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks) If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms? The dataset is probably quite representative of Swedish internet discourse in general, and of the Swedish public sector, but we know that this data does not necessarily reflect the entire Swedish population.
- Are there tasks for which the dataset should not be used? If so, please provide a description. None that we are currently aware of.
- Any other comments? No.
# Distribution
- Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. No.
- How will the dataset be distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? N/A.
- When will the dataset be distributed? N/A.
- Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. N/A.
- Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. N/A.
- Any other comments? No.
# Maintenance
- Who is supporting/hosting/maintaining the dataset? AI Sweden at Lindholmen Science Park AB.
- How can the owner/curator/manager of the dataset be contacted (e.g., email address)? [email protected]
- Is there an erratum? If so, please provide a link or other access point. N/A.
- Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)? Currently, there are no plans for updating the dataset.
- If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced. Read the privacy policy for the NLU initiative at AI Sweden [here](https://www.ai.se/en/privacy-policy-nlu).
- Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to users. N/A.
- If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/ verified? If so, please describe how. If not, why not? Is there a process for communicating/ distributing these contributions to other users? If so, please provide a description. Not at this time.
- Any other comments? No.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_AI-Sweden-Models__gpt-sw3-1.3b-instruct)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 30.26 |
| ARC (25-shot) | 30.97 |
| HellaSwag (10-shot) | 51.42 |
| MMLU (5-shot) | 26.17 |
| TruthfulQA (0-shot) | 40.31 |
| Winogrande (5-shot) | 56.75 |
| GSM8K (5-shot) | 1.59 |
| DROP (3-shot) | 4.64 |
| null |
Non_BioNLP
|
# Model description
[AI Sweden](https://huggingface.co/AI-Sweden-Models/)
**Base models**
[GPT-Sw3 126M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/) | [GPT-Sw3 356M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/) | [GPT-Sw3 1.3B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/)
[GPT-Sw3 6.7B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b/) | [GPT-Sw3 6.7B v2](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2/) | [GPT-Sw3 20B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b/)
[GPT-Sw3 40B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-40b/)
**Instruct models**
[GPT-Sw3 126M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m-instruct/) | [GPT-Sw3 356M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m-instruct/) | [GPT-Sw3 1.3B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b-instruct/)
[GPT-Sw3 6.7B v2 Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct/) | [GPT-Sw3 20B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct/)
**Quantized models**
[GPT-Sw3 6.7B v2 Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq) | [GPT-Sw3 20B Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct-4bit-gptq)
GPT-SW3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-SW3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation.
The `instruct` models were fine-tuned on instruction data using both chat and raw text formats.
# Intended use
GPT-SW3 is an autoregressive large language model that is capable of generating coherent text in 5 different languages and 4 programming languages. GPT-SW3 can also be instructed to perform text tasks that it has not been explicitly trained for, by casting them as text generation tasks.
# Limitations
Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of, for example, bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: overrepresent some viewpoints and underrepresent others, contain stereotypes, and generate hateful, abusive, violent, discriminatory or prejudicial language. The model may make errors, including producing incorrect information as if it were factual; it may generate irrelevant or repetitive outputs, and content that may not be appropriate for all settings, including sexual content.
# How to use
To be able to access the model from Python, since this is a private repository, you have to log in with your access token. This can be done with `huggingface-cli login`, see [HuggingFace Quick Start Guide](https://huggingface.co/docs/huggingface_hub/quick-start#login) for more information.
The following code snippet loads our tokenizer & model, and uses the GPU if available.
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
# Initialize Variables
model_name = "AI-Sweden-Models/gpt-sw3-1.3b-instruct"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
prompt = "Träd är fina för att"
# Initialize Tokenizer & Model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
model.to(device)
```
Generating text using the `generate` method is done as follows:
```python
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device)
generated_token_ids = model.generate(
inputs=input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.6,
top_p=1,
)[0]
generated_text = tokenizer.decode(generated_token_ids)
```
The chat format used during data-preprocessing takes the form:
```
<|endoftext|><s>
User:
Jag tycker träd är fina
<s>
Bot:
Kul att du tycker det!
<s>
...
```
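For convenience, a prompt in this format can also be assembled programmatically. The small helper below is only a sketch: the function name and turn structure are illustrative, while the token layout follows the chat format shown above.
```python
# Illustrative helper for assembling a prompt in the chat format above.
# The function and variable names are our own; only the token layout
# (<|endoftext|>, <s>, "User:"/"Bot:") follows the documented format.
def build_prompt(turns):
    parts = ["<|endoftext|><s>"]
    for speaker, text in turns:
        parts.append(f"{speaker}:\n{text}\n<s>")
    parts.append("Bot:")  # leave the final Bot turn open for the model to complete
    return "\n".join(parts)

prompt = build_prompt([("User", "Varför är träd fina?")])
print(prompt)
```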
The procedure to generate text is the same as before:
```python
prompt = """
<|endoftext|><s>
User:
Varför är träd fina?
<s>
Bot:
""".strip()
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device)
generated_token_ids = model.generate(
inputs=input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.6,
top_p=1,
)[0]
generated_text = tokenizer.decode(generated_token_ids)
```
A convenient alternative to the `generate` method is the HuggingFace pipeline, which handles most of the work for you:
```python
generator = pipeline('text-generation', tokenizer=tokenizer, model=model, device=device)
generated = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1)[0]["generated_text"]
```
# Compliance
The release of GPT-SW3 consists of model weights, a configuration file, a tokenizer file and a vocabulary file. None of these files contain any personally identifiable information (PII) or any copyrighted material.
# GPT-SW3 Model Card
Following Mitchell et al. (2018), we provide a model card for GPT-SW3.
# Model Details
- Person or organization developing model: GPT-SW3 was developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language.
- Model date: GPT-SW3 date of release 2022-12-20
- Model version: This is the second generation of GPT-SW3.
- Model type: GPT-SW3 is a large decoder-only transformer language model.
- Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: GPT-SW3 was trained with the NeMo Megatron GPT implementation.
- Paper or other resource for more information: N/A.
- License: [LICENSE](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b-instruct/blob/main/LICENSE).
- Where to send questions or comments about the model: [email protected]
# Intended Use
- Primary intended uses: We pre-release GPT-SW3 for research and evaluation of the capabilities of Large Language Models for the Nordic languages. This is an important step in the process of knowledge building for LLMs, validating the model and collecting feedback on both what works well and what does not.
- Primary intended users: Organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community.
- Out-of-scope use cases: See the modified RAIL license.
# Data, Limitations, and Recommendations
- Data selection for training: Training data for GPT-SW3 was selected based on a combination of breadth and availability. See our Datasheet for more detailed information on the data used to train our model.
- Data selection for evaluation: N/A
- Limitations: Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. In general, GPT-SW3 is not immune from the plethora of issues that plague modern large language models. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: overrepresent some viewpoints and underrepresent others; contain stereotypes; generate hateful, abusive, or violent language, discriminatory or prejudicial language, and content that may not be appropriate for all settings, including sexual content; make errors, including producing incorrect information as if it were factual; and generate irrelevant or repetitive outputs.
- Recommendations for future work: Indirect users should be made aware when the content they're working with is created by the LLM. Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. Models pretrained with the LLM should include an updated Model Card. Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
- We hope that the release of GPT-SW3, as well as information around our model training process, will increase open science around both large language models in specific and natural language processing and deep learning in general.
# GPT-SW3 Datasheet
- We follow the recommendations of Gebru et al. (2021) and provide a datasheet for the dataset used to train GPT-SW3.
# Motivation
- For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description. Pre-training of Large Language Models (LLMs), such as GPT-3 (T. B. Brown et al., 2020), Gopher (J. W. Rae et al., 2022), BLOOM (T. L. Scao et al., 2022), etc., requires hundreds or even thousands of gigabytes of text data, with recent studies (Chinchilla: J. Hoffmann et al., 2022) suggesting that the scale of the training data is even more important than previously imagined. Therefore, in order to train Swedish LLMs, we needed a large-scale Swedish dataset of high quality. Since no such datasets existed before this initiative, we collected data in the Nordic and English languages.
- Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? The Strategic Initiative Natural Language Understanding at AI Sweden has established a new research environment in which collaboration is key. The core team working on the creation of the dataset is the NLU research group at AI Sweden. This group consists of researchers and developers from AI Sweden (Lindholmen Science Park AB) and RISE.
- Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number. The Swedish Innovation Agency (Vinnova) has funded this work across several different grants, including 2019-02996 and 2022-00949.
- Any other comments? No.
# Composition
- What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. The instances are textual documents categorized by language and document type. The dataset is a filtered and deduplicated collection that includes the following sources:
- Books
- Litteraturbanken (https://litteraturbanken.se/)
- The Pile
- Articles
- Diva (https://www.diva-portal.org/)
- The Pile: PubMed
- The Pile: ArXiv
- Code
- Code Parrot: Github code (https://huggingface.co/datasets/codeparrot/github-code)
- Conversational
- Familjeliv (https://www.familjeliv.se/)
- Flashback (https://flashback.se/)
- Datasets collected through Parlai (see Appendix in data paper for complete list) (https://github.com/facebookresearch/ParlAI)
- Pushshift.io Reddit dataset, developed in Baumgartner et al. (2020) and processed in Roller et al. (2021)
- Math
- English Math dataset generated with code from DeepMind (D. Saxton et al., 2019)
- Swedish Math dataset, generated as above with manually translated templates
- Miscellaneous
- Summarization data (https://www.ida.liu.se/~arnjo82/papers/clarin-21-julius.pdf)
- OPUS, the open parallel corpus (https://opus.nlpl.eu/)
- Movie scripts (https://github.com/Aveek-Saha/Movie-Script-Database)
- Natural Instructions (https://github.com/allenai/natural-instructions)
- P3 (Public Pool of Prompts), (https://huggingface.co/datasets/bigscience/P3)
- The Norwegian Colossal Corpus (https://huggingface.co/datasets/NbAiLab/NCC)
- Danish Gigaword (https://gigaword.dk/)
- Icelandic Gigaword (https://clarin.is/en/resources/gigaword/)
- The Pile: Stack Exchange
- Web Common Crawl
- Web data from the project LES (Linguistic Explorations of Societies, https://les.gu.se).
- Multilingual C4 (MC4), prepared by AllenAI from C4 (C. Raffel et al., 2019)
- Open Super-large Crawled Aggregated coRpus (OSCAR) (P. O. Suarez, 2019)
- The Pile: Open Web Text
- Web Sources
- Various public Swedish website scrapes (see Appendix in data paper)
- Familjeliv Articles
- Public Swedish Job Ads from JobTech/Arbetsförmedlingen
- Wikipedia
- Official Wikipedia dumps
- **Instruction data**:
- [dolly](https://github.com/databrickslabs/dolly/tree/master/data)
- [Open Assistant](https://github.com/LAION-AI/Open-Assistant/blob/main/docs/docs/data/datasets.md)
- [OIG](https://laion.ai/blog/oig-dataset/)
- Fass: Swedish pharmaceutical information, which was transformed into Q&A format.
- How many instances are there in total (of each type, if appropriate)? The training data consists of 1.1TB UTF-8 encoded text, containing 660M documents with a total of 320B tokens.
- Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). The subset of our dataset that comes from multilingual Common Crawl datasets (MC4, OSCAR) is filtered by language to only include Swedish, Norwegian, Danish, and Icelandic. From The Pile, we included only the parts that typically are of highest textual quality or that complemented the rest of our dataset with sources we otherwise lacked (e.g. books). The remainder of the dataset was collected from the above sources.
- What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description. Each instance consists of raw text data.
- Is there a label or target associated with each instance? If so, please provide a description. No.
- Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text. No.
- Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit. There are no explicit relationships between individual instances.
- Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them. There are no explicit splits recommended for this dataset. When pre-training the model, a random split for train, dev, test is set to 99.99%, 0.08%, 0.02% respectively, and is sampled proportionally to each subset’s weight and size. The weight of each subset was manually decided beforehand. These decisions were made considering the data’s value, source, and language, to form a representative and balanced pre-training corpus.
- Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. The dataset is a collection of many sources, some of which naturally contain some overlap. Although we have performed deduplication, some overlap may still remain. Furthermore, there may be some noise remaining from artifacts originating in Common Crawl datasets that have been missed by our data filtering process. Except for these, we are not aware of any errors, sources of noise, or redundancies.
- Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? The dataset is self-contained.
- Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. The dataset contains subsets of public Common Crawl, Reddit, Familjeliv and Flashback. These could contain sentences that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety.
- Does the dataset relate to people? If not, you may skip the remaining questions in this section. Some documents of this data relate to people, such as news articles, Wikipedia descriptions, etc.
- Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. No, the dataset does not explicitly include subpopulation identification.
- Any other comments? No.
# Collection Process
- How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. N/A. The dataset is a union of publicly available datasets and sources.
- What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated? The data was downloaded from the internet.
- If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? Please see previous answers for how parts of the dataset were selected.
- Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? This data is mined, filtered and sampled by machines.
- Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. The dataset was collected during the period June 2021 to June 2022. The creation of the collected sources varies, with e.g. Common Crawl data that have been continuously collected over 12 years.
- Does the dataset relate to people? If not, you may skip the remainder of the questions in this section. Yes. The texts have been produced by people. Any personal information potentially present in publicly available data sources and thus in the created dataset is of no interest to the collection and use of the dataset.
- Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. Yes.
- Any other comments? No.
# Preprocessing/cleaning/labeling
- Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remainder of the questions in this section. The dataset was filtered and re-formatted on a document-level using standard procedures, inspired by the work in The BigScience ROOTS Corpus (H. Laurençon et al., 2022) and Gopher (J. W. Rae et al., 2022). This was done with the goal of achieving a consistent text format throughout the dataset, and to remove documents that did not meet our textual quality requirements (e.g. repetitiveness). Furthermore, the dataset was deduplicated to remedy the overlap between collected subsets using the MinHash algorithm, similar to the method used in GPT-3 and The Pile, and described in greater detail in “Deduplicating Training Data Makes Language Models Better” (K. Lee et al., 2021).
**Instruction data**: The processing outlined above was not applied to the instruction data.
Instruction data was turned into chat-turn format and formatted accordingly with an end-of-turn token, as well as unrolled into raw textual form.
The Open Assistant data was also automatically translated using GPT-SW3 into Swedish, Danish, Norwegian, and Icelandic.
- Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data. The “raw” component datasets are publicly available in their respective locations.
- Any other comments? No.
# Uses
- Has the dataset been used for any tasks already? If so, please provide a description. The dataset was used to pre-train the GPT-SW3 models.
- Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point. N/A.
- What (other) tasks could the dataset be used for? The data can be used to pre-train language models, which are foundations for many current and future language tasks.
- Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks) If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms? The dataset is probably quite representative of Swedish internet discourse in general, and of the Swedish public sector, but we know that this data does not necessarily reflect the entire Swedish population.
- Are there tasks for which the dataset should not be used? If so, please provide a description. None that we are currently aware of.
- Any other comments? No.
# Distribution
- Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. No.
- How will the dataset be distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? N/A.
- When will the dataset be distributed? N/A.
- Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. N/A.
- Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. N/A.
- Any other comments? No.
# Maintenance
- Who is supporting/hosting/maintaining the dataset? AI Sweden at Lindholmen Science Park AB.
- How can the owner/curator/manager of the dataset be contacted (e.g., email address)? [email protected]
- Is there an erratum? If so, please provide a link or other access point. N/A.
- Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)? Currently, there are no plans for updating the dataset.
- If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced. Read the privacy policy for the NLU initiative at AI Sweden [here](https://www.ai.se/en/privacy-policy-nlu).
- Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to users. N/A.
- If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/ verified? If so, please describe how. If not, why not? Is there a process for communicating/ distributing these contributions to other users? If so, please provide a description. Not at this time.
- Any other comments? No.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_AI-Sweden-Models__gpt-sw3-1.3b-instruct)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 30.26 |
| ARC (25-shot) | 30.97 |
| HellaSwag (10-shot) | 51.42 |
| MMLU (5-shot) | 26.17 |
| TruthfulQA (0-shot) | 40.31 |
| Winogrande (5-shot) | 56.75 |
| GSM8K (5-shot) | 1.59 |
| DROP (3-shot) | 4.64 |
|
{"base_model": "AI-Sweden-Models/gpt-sw3-1.3b", "datasets": ["laion/OIG", "databricks/databricks-dolly-15k", "OpenAssistant/oasst1"], "language": ["da", "sv", "no", "en", "is"], "license": "other"}
|
task
|
[
"SUMMARIZATION"
] | 42,336 |
haonan-li/bactrian-vi-bloom-7b1-lora
|
haonan-li
| null |
[
"arxiv:2305.15011",
"license:mit",
"region:us"
] | 2023-06-13T13:27:08Z |
2023-06-13T13:27:20+00:00
| 0 | 0 |
---
license: mit
---
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Vietnamese.
### Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca), and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translation API (conducted in April 2023).
3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted in April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-vi-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
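For inference, the adapter can be loaded on top of the base model. The snippet below is only a sketch: it assumes the `peft` library and an Alpaca-style prompt template, neither of which is specified in this card, and the generation settings are illustrative.
```python
# Sketch: load the LoRA adapter on top of BLOOM-7b1 with peft.
# Repository ids follow this model card; the prompt template and
# generation settings are assumptions, not part of the release.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "bigscience/bloom-7b1"
adapter_id = "haonan-li/bactrian-vi-bloom-7b1-lora"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# Assumed Alpaca-style instruction prompt, in Vietnamese.
prompt = "### Instruction:\nHãy giới thiệu về Hà Nội.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```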
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| null |
Non_BioNLP
|
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Vietnamese.
### Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca), and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translation API (conducted in April 2023).
3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted in April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-vi-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"license": "mit"}
|
task
|
[
"TRANSLATION"
] | 42,337 |
HPLT/sft-fpft-es-bloom-7b1
|
HPLT
|
text-generation
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"generation",
"question answering",
"instruction tuning",
"es",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-04-04T20:38:37Z |
2024-04-04T20:51:46+00:00
| 6 | 0 |
---
language:
- es
license: cc-by-nc-4.0
tags:
- generation
- question answering
- instruction tuning
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with full-parameter fine-tuning and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bloom-7b1](https://huggingface.co/bloom-7b1)
* Instruction tuning language: Spanish
* Training method: full-parameter fine-tuning.
* Best checkpoint: best cross-entropy on a validation set, trained for 3 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded using the `transformers` library.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/fpft) for inference and training instructions.
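As a minimal loading-and-generation sketch (the Spanish prompt and the generation settings below are illustrative; consult the repository above for the exact instruction format used during fine-tuning):
```python
# Minimal sketch: load the fine-tuned checkpoint and generate a reply.
# The prompt wording and sampling settings are assumptions; see the GitHub
# repository for the exact instruction format used during fine-tuning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HPLT/sft-fpft-es-bloom-7b1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Explica brevemente qué es la fotosíntesis."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```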
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
| null |
Non_BioNLP
|
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with full-parameter fine-tuning and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bloom-7b1](https://huggingface.co/bloom-7b1)
* Instruction tuning language: Spanish
* Training method: full-parameter fine-tuning.
* Best checkpoint: best cross-entropy on a validation set, trained for 3 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded using the `transformers` library.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/fpft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
{"language": ["es"], "license": "cc-by-nc-4.0", "tags": ["generation", "question answering", "instruction tuning"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,338 |
MultiBertGunjanPatrick/multiberts-seed-0-1200k
|
MultiBertGunjanPatrick
| null |
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2021-10-04T04:57:24+00:00
| 104 | 0 |
---
datasets:
- bookcorpus
- wikipedia
language: en
license: apache-2.0
tags:
- exbert
- multiberts
- multiberts-seed-0
---
# MultiBERTs Seed 0 Checkpoint 1200k (uncased)
This is the MultiBERTs (pretrained BERT) seed 0 intermediate checkpoint at 1200k steps, pretrained on English using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT-2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-1200k')
model = BertModel.from_pretrained("multiberts-seed-0-1200k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
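As an illustration of this 80/10/10 scheme, here is a simplified token-level sketch; it is not the actual pretraining code, and the toy vocabulary is an assumption.
```python
# Simplified sketch of the MLM masking scheme described above:
# select 15% of tokens; replace 80% of those with [MASK], 10% with a random
# token, and leave 10% unchanged. Not the exact pretraining implementation.
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mlm_prob=0.15):
    masked, labels = [], []
    for tok in tokens:
        if random.random() < mlm_prob:
            labels.append(tok)                       # model must predict this token
            r = random.random()
            if r < 0.8:
                masked.append(mask_token)            # 80%: [MASK]
            elif r < 0.9:
                masked.append(random.choice(vocab))  # 10%: random token
            else:
                masked.append(tok)                   # 10%: unchanged
        else:
            labels.append(None)                      # not a prediction target
            masked.append(tok)
    return masked, labels

print(mask_tokens(["the", "cat", "sat", "on", "the", "mat"], vocab=["dog", "tree", "ran"]))
```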
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
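A rough PyTorch equivalent of that optimization setup is sketched below. The original TensorFlow implementation differs in detail, and the decoupled weight decay (AdamW) shown here is an assumption about how the 0.01 weight decay was applied; the freshly initialized model is only a stand-in.

```python
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining(BertConfig())  # stand-in for the actual pretraining model

optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,       # learning-rate warmup for 10,000 steps
    num_training_steps=2_000_000,  # linear decay over the two million training steps
)
```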
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| null |
Non_BioNLP
|
{"datasets": ["bookcorpus", "wikipedia"], "language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,339 |
ThinSu/distilbert-base-uncased-finetuned-emotion
|
ThinSu
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-05-27T14:56:08Z |
2024-05-27T15:00:33+00:00
| 4 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.9245
name: Accuracy
- type: f1
value: 0.9246959777283907
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2095
- Accuracy: 0.9245
- F1: 0.9247
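A quick way to try the checkpoint is through the text-classification pipeline. This is a usage sketch only: the example sentence is arbitrary, and the returned label names depend on the id2label mapping stored in the model config.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ThinSu/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how well this turned out!"))
```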
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
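These hyperparameters map roughly onto the `Trainer` API as sketched below. This is an illustrative reconstruction under the stated settings, not the exact training script that produced the checkpoint.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("emotion")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=6
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

encoded = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # evaluate once per epoch, as in the results table
)

# Passing the tokenizer gives dynamic padding via DataCollatorWithPadding.
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```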
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8408 | 1.0 | 250 | 0.3063 | 0.9065 | 0.9063 |
| 0.2369 | 2.0 | 500 | 0.2095 | 0.9245 | 0.9247 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9245, "name": "Accuracy"}, {"type": "f1", "value": 0.9246959777283907, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,340 |